00:00:00.001 Started by upstream project "autotest-per-patch" build number 126149 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.068 using credential 00000000-0000-0000-0000-000000000002 00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.578 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.589 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.600 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.600 > git config core.sparsecheckout # timeout=10 00:00:03.610 > git read-tree -mu HEAD # timeout=10 00:00:03.625 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.641 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.641 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.728 [Pipeline] Start of Pipeline 00:00:03.743 [Pipeline] library 00:00:03.745 Loading library shm_lib@master 00:00:03.745 Library shm_lib@master is cached. Copying from home. 00:00:03.766 [Pipeline] node 00:00:03.776 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.780 [Pipeline] { 00:00:03.790 [Pipeline] catchError 00:00:03.792 [Pipeline] { 00:00:03.802 [Pipeline] wrap 00:00:03.810 [Pipeline] { 00:00:03.816 [Pipeline] stage 00:00:03.817 [Pipeline] { (Prologue) 00:00:04.023 [Pipeline] sh 00:00:04.308 + logger -p user.info -t JENKINS-CI 00:00:04.325 [Pipeline] echo 00:00:04.327 Node: WFP8 00:00:04.336 [Pipeline] sh 00:00:04.635 [Pipeline] setCustomBuildProperty 00:00:04.650 [Pipeline] echo 00:00:04.651 Cleanup processes 00:00:04.658 [Pipeline] sh 00:00:04.941 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.941 2959714 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.954 [Pipeline] sh 00:00:05.236 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.236 ++ grep -v 'sudo pgrep' 00:00:05.236 ++ awk '{print $1}' 00:00:05.236 + sudo kill -9 00:00:05.236 + true 00:00:05.250 [Pipeline] cleanWs 00:00:05.261 [WS-CLEANUP] Deleting project workspace... 00:00:05.261 [WS-CLEANUP] Deferred wipeout is used... 
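The "Cleanup processes" step above uses a common pgrep idiom: list everything still running out of the workspace, drop the pgrep invocation itself from the listing, and kill the remaining PIDs. A minimal standalone sketch of the same pattern, assuming the workspace path from this job (the pipeline inlines these commands rather than shipping them as a script):

#!/usr/bin/env bash
# Sketch of the cleanup idiom traced above; WORKSPACE is this job's path.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# PIDs of processes started from the workspace, minus the pgrep we just ran.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# With no stale PIDs, kill -9 exits nonzero ("not enough arguments"), so the
# trailing '|| true' keeps a 'set -e' shell alive, matching '+ true' above.
sudo kill -9 $pids || true    # $pids left unquoted on purpose: one arg per PID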
00:00:05.268 [WS-CLEANUP] done 00:00:05.274 [Pipeline] setCustomBuildProperty 00:00:05.290 [Pipeline] sh 00:00:05.567 + sudo git config --global --replace-all safe.directory '*' 00:00:05.647 [Pipeline] httpRequest 00:00:05.666 [Pipeline] echo 00:00:05.667 Sorcerer 10.211.164.101 is alive 00:00:05.673 [Pipeline] httpRequest 00:00:05.677 HttpMethod: GET 00:00:05.677 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.678 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.689 Response Code: HTTP/1.1 200 OK 00:00:05.690 Success: Status code 200 is in the accepted range: 200,404 00:00:05.690 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.765 [Pipeline] sh 00:00:07.044 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.060 [Pipeline] httpRequest 00:00:07.085 [Pipeline] echo 00:00:07.086 Sorcerer 10.211.164.101 is alive 00:00:07.093 [Pipeline] httpRequest 00:00:07.097 HttpMethod: GET 00:00:07.097 URL: http://10.211.164.101/packages/spdk_897e912d5ef39b95adea4d69f24b5af81e596e94.tar.gz 00:00:07.098 Sending request to url: http://10.211.164.101/packages/spdk_897e912d5ef39b95adea4d69f24b5af81e596e94.tar.gz 00:00:07.118 Response Code: HTTP/1.1 200 OK 00:00:07.119 Success: Status code 200 is in the accepted range: 200,404 00:00:07.119 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_897e912d5ef39b95adea4d69f24b5af81e596e94.tar.gz 00:01:05.390 [Pipeline] sh 00:01:05.678 + tar --no-same-owner -xf spdk_897e912d5ef39b95adea4d69f24b5af81e596e94.tar.gz 00:01:08.224 [Pipeline] sh 00:01:08.558 + git -C spdk log --oneline -n5 00:01:08.558 897e912d5 lib/ublk: wait and retry before starting USER RECOVERY 00:01:08.558 719d03c6a sock/uring: only register net impl if supported 00:01:08.558 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:08.558 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:08.558 6c7c1f57e accel: add sequence outstanding stat 00:01:08.570 [Pipeline] } 00:01:08.590 [Pipeline] // stage 00:01:08.598 [Pipeline] stage 00:01:08.601 [Pipeline] { (Prepare) 00:01:08.617 [Pipeline] writeFile 00:01:08.632 [Pipeline] sh 00:01:08.913 + logger -p user.info -t JENKINS-CI 00:01:08.925 [Pipeline] sh 00:01:09.208 + logger -p user.info -t JENKINS-CI 00:01:09.221 [Pipeline] sh 00:01:09.503 + cat autorun-spdk.conf 00:01:09.503 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.503 SPDK_TEST_NVMF=1 00:01:09.503 SPDK_TEST_NVME_CLI=1 00:01:09.503 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.503 SPDK_TEST_NVMF_NICS=e810 00:01:09.503 SPDK_TEST_VFIOUSER=1 00:01:09.503 SPDK_RUN_UBSAN=1 00:01:09.503 NET_TYPE=phy 00:01:09.510 RUN_NIGHTLY=0 00:01:09.518 [Pipeline] readFile 00:01:09.550 [Pipeline] withEnv 00:01:09.552 [Pipeline] { 00:01:09.568 [Pipeline] sh 00:01:09.855 + set -ex 00:01:09.855 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.855 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.855 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.855 ++ SPDK_TEST_NVMF=1 00:01:09.855 ++ SPDK_TEST_NVME_CLI=1 00:01:09.855 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.855 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.855 ++ SPDK_TEST_VFIOUSER=1 00:01:09.855 ++ SPDK_RUN_UBSAN=1 00:01:09.855 ++ NET_TYPE=phy 00:01:09.855 ++ RUN_NIGHTLY=0 00:01:09.855 + case $SPDK_TEST_NVMF_NICS in 00:01:09.855 + DRIVERS=ice 00:01:09.855 + [[ 
tcp == \r\d\m\a ]] 00:01:09.855 + [[ -n ice ]] 00:01:09.855 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:09.855 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:13.149 rmmod: ERROR: Module irdma is not currently loaded 00:01:13.149 rmmod: ERROR: Module i40iw is not currently loaded 00:01:13.149 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:13.149 + true 00:01:13.149 + for D in $DRIVERS 00:01:13.149 + sudo modprobe ice 00:01:13.149 + exit 0 00:01:13.159 [Pipeline] } 00:01:13.175 [Pipeline] // withEnv 00:01:13.180 [Pipeline] } 00:01:13.196 [Pipeline] // stage 00:01:13.208 [Pipeline] catchError 00:01:13.210 [Pipeline] { 00:01:13.226 [Pipeline] timeout 00:01:13.226 Timeout set to expire in 50 min 00:01:13.228 [Pipeline] { 00:01:13.244 [Pipeline] stage 00:01:13.246 [Pipeline] { (Tests) 00:01:13.263 [Pipeline] sh 00:01:13.544 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.545 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.545 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.545 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:13.545 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.545 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.545 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:13.545 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.545 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.545 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.545 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:13.545 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.545 + source /etc/os-release 00:01:13.545 ++ NAME='Fedora Linux' 00:01:13.545 ++ VERSION='38 (Cloud Edition)' 00:01:13.545 ++ ID=fedora 00:01:13.545 ++ VERSION_ID=38 00:01:13.545 ++ VERSION_CODENAME= 00:01:13.545 ++ PLATFORM_ID=platform:f38 00:01:13.545 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:13.545 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.545 ++ LOGO=fedora-logo-icon 00:01:13.545 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:13.545 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.545 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:13.545 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.545 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.545 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.545 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:13.545 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.545 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:13.545 ++ SUPPORT_END=2024-05-14 00:01:13.545 ++ VARIANT='Cloud Edition' 00:01:13.545 ++ VARIANT_ID=cloud 00:01:13.545 + uname -a 00:01:13.545 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:13.545 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:16.079 Hugepages 00:01:16.079 node hugesize free / total 00:01:16.079 node0 1048576kB 0 / 0 00:01:16.079 node0 2048kB 2048 / 2048 00:01:16.079 node1 1048576kB 0 / 0 00:01:16.079 node1 2048kB 0 / 0 00:01:16.079 00:01:16.079 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.079 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:16.079 
I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:16.079 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:16.079 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:16.079 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:16.079 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:16.079 + rm -f /tmp/spdk-ld-path 00:01:16.079 + source autorun-spdk.conf 00:01:16.079 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.079 ++ SPDK_TEST_NVMF=1 00:01:16.079 ++ SPDK_TEST_NVME_CLI=1 00:01:16.079 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.079 ++ SPDK_TEST_NVMF_NICS=e810 00:01:16.079 ++ SPDK_TEST_VFIOUSER=1 00:01:16.079 ++ SPDK_RUN_UBSAN=1 00:01:16.079 ++ NET_TYPE=phy 00:01:16.079 ++ RUN_NIGHTLY=0 00:01:16.079 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.079 + [[ -n '' ]] 00:01:16.079 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.079 + for M in /var/spdk/build-*-manifest.txt 00:01:16.079 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.079 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.079 + for M in /var/spdk/build-*-manifest.txt 00:01:16.079 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.079 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.079 ++ uname 00:01:16.079 + [[ Linux == \L\i\n\u\x ]] 00:01:16.079 + sudo dmesg -T 00:01:16.079 + sudo dmesg --clear 00:01:16.338 + dmesg_pid=2961154 00:01:16.338 + [[ Fedora Linux == FreeBSD ]] 00:01:16.338 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.338 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.338 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.338 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.338 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.338 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.338 + sudo dmesg -Tw 00:01:16.338 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.338 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:16.338 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.338 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.338 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.338 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.338 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.338 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.338 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.338 Test configuration: 00:01:16.338 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.338 SPDK_TEST_NVMF=1 00:01:16.338 SPDK_TEST_NVME_CLI=1 00:01:16.338 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.338 SPDK_TEST_NVMF_NICS=e810 00:01:16.338 SPDK_TEST_VFIOUSER=1 00:01:16.338 SPDK_RUN_UBSAN=1 00:01:16.338 NET_TYPE=phy 00:01:16.338 RUN_NIGHTLY=0 07:37:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:16.338 07:37:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.338 07:37:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.338 07:37:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.338 07:37:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.338 07:37:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.338 07:37:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.338 07:37:00 -- paths/export.sh@5 -- $ export PATH 00:01:16.338 07:37:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.338 07:37:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:16.338 07:37:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:16.338 07:37:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721021820.XXXXXX 00:01:16.338 07:37:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721021820.YhBgbK 00:01:16.338 07:37:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:16.338 07:37:00 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:16.338 07:37:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:16.338 07:37:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.338 07:37:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.338 07:37:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:16.338 07:37:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:16.338 07:37:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.338 07:37:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:16.338 07:37:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:16.338 07:37:01 -- pm/common@17 -- $ local monitor 00:01:16.338 07:37:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.338 07:37:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.338 07:37:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.338 07:37:01 -- pm/common@21 -- $ date +%s 00:01:16.338 07:37:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.338 07:37:01 -- pm/common@21 -- $ date +%s 00:01:16.338 07:37:01 -- pm/common@25 -- $ sleep 1 00:01:16.338 07:37:01 -- pm/common@21 -- $ date +%s 00:01:16.338 07:37:01 -- pm/common@21 -- $ date +%s 00:01:16.338 07:37:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021821 00:01:16.338 07:37:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021821 00:01:16.338 07:37:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021821 00:01:16.338 07:37:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721021821 00:01:16.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021821_collect-vmstat.pm.log 00:01:16.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021821_collect-cpu-load.pm.log 00:01:16.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021821_collect-cpu-temp.pm.log 00:01:16.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721021821_collect-bmc-pm.bmc.pm.log 00:01:17.277 07:37:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:17.277 07:37:02 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.277 07:37:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.277 07:37:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.277 07:37:02 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.277 Mon Jul 15 05:37:02 AM UTC 2024 00:01:17.277 07:37:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.536 v24.09-pre-203-g897e912d5 00:01:17.536 07:37:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.536 07:37:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.536 07:37:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.536 07:37:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:17.536 07:37:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:17.536 07:37:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.536 ************************************ 00:01:17.536 START TEST ubsan 00:01:17.536 ************************************ 00:01:17.536 07:37:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:17.536 using ubsan 00:01:17.536 00:01:17.536 real 0m0.000s 00:01:17.536 user 0m0.000s 00:01:17.536 sys 0m0.000s 00:01:17.536 07:37:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:17.536 07:37:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.536 ************************************ 00:01:17.536 END TEST ubsan 00:01:17.536 ************************************ 00:01:17.536 07:37:02 -- common/autotest_common.sh@1142 -- $ return 0 00:01:17.536 07:37:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.536 07:37:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.536 07:37:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.536 07:37:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:17.536 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:17.536 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:18.106 Using 'verbs' RDMA provider 00:01:30.889 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:43.160 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:43.160 Creating mk/config.mk...done. 00:01:43.160 Creating mk/cc.flags.mk...done. 00:01:43.160 Type 'make' to build. 00:01:43.160 07:37:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:43.160 07:37:27 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.160 07:37:27 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.160 07:37:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.160 ************************************ 00:01:43.160 START TEST make 00:01:43.160 ************************************ 00:01:43.160 07:37:27 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:43.160 make[1]: Nothing to be done for 'all'. 
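Both the ubsan block and the make banner above come from SPDK's run_test helper (common/autotest_common.sh in the trace prefixes), which brackets a command with START/END banners and a bash 'time' report. A minimal sketch of a wrapper with the same observable shape, assuming only what the banners show; the real helper also toggles xtrace around the body:

#!/usr/bin/env bash
# run_test-style wrapper: banner, timed command, banner, preserved exit code.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local rc=0
    time "$@" || rc=$?    # 'time' prints the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test ubsan echo 'using ubsan'    # same invocation as in the log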
00:01:44.104 The Meson build system 00:01:44.104 Version: 1.3.1 00:01:44.104 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.104 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.104 Build type: native build 00:01:44.104 Project name: libvfio-user 00:01:44.104 Project version: 0.0.1 00:01:44.104 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.104 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.104 Host machine cpu family: x86_64 00:01:44.104 Host machine cpu: x86_64 00:01:44.104 Run-time dependency threads found: YES 00:01:44.104 Library dl found: YES 00:01:44.104 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.104 Run-time dependency json-c found: YES 0.17 00:01:44.104 Run-time dependency cmocka found: YES 1.1.7 00:01:44.104 Program pytest-3 found: NO 00:01:44.104 Program flake8 found: NO 00:01:44.104 Program misspell-fixer found: NO 00:01:44.104 Program restructuredtext-lint found: NO 00:01:44.104 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.104 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.104 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.104 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.104 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.104 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.104 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.104 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.104 Build targets in project: 8 00:01:44.104 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.104 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.104 00:01:44.104 libvfio-user 0.0.1 00:01:44.104 00:01:44.104 User defined options 00:01:44.104 buildtype : debug 00:01:44.104 default_library: shared 00:01:44.104 libdir : /usr/local/lib 00:01:44.104 00:01:44.104 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.669 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.927 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.927 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.927 [3/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.927 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.927 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.927 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.927 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.927 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.927 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.927 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.927 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.927 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.927 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.927 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.927 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.927 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.927 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.927 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.927 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.927 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.927 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.927 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.927 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.927 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.927 [25/37] Compiling C object samples/server.p/server.c.o 00:01:44.927 [26/37] Compiling C object samples/client.p/client.c.o 00:01:44.927 [27/37] Linking target samples/client 00:01:44.927 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.927 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.927 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:45.184 [31/37] Linking target test/unit_tests 00:01:45.184 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:45.184 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:45.184 [34/37] Linking target samples/null 00:01:45.184 [35/37] Linking target samples/gpio-pci-idio-16 00:01:45.184 [36/37] Linking target samples/server 00:01:45.184 [37/37] Linking target samples/lspci 00:01:45.184 INFO: autodetecting backend as ninja 00:01:45.184 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
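The libvfio-user configure output above, together with the DESTDIR install line that follows, maps onto a plain meson setup / ninja / meson install sequence. A sketch of the equivalent manual invocation, with paths shortened and options read off the "User defined options" block; the exact flags SPDK passes are generated by its build scripts, so treat this as an approximation:

# Configure libvfio-user out of tree with the options shown above.
SRC=spdk/libvfio-user
BUILD=spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" --buildtype debug -Ddefault_library=shared --libdir /usr/local/lib
# Compile, then stage the result under DESTDIR, as the next log line does.
ninja -C "$BUILD"
DESTDIR=spdk/build/libvfio-user meson install --quiet -C "$BUILD"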
00:01:45.184 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.750 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.750 ninja: no work to do. 00:01:51.017 The Meson build system 00:01:51.017 Version: 1.3.1 00:01:51.017 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.017 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.017 Build type: native build 00:01:51.017 Program cat found: YES (/usr/bin/cat) 00:01:51.017 Project name: DPDK 00:01:51.017 Project version: 24.03.0 00:01:51.017 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.017 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.017 Host machine cpu family: x86_64 00:01:51.017 Host machine cpu: x86_64 00:01:51.017 Message: ## Building in Developer Mode ## 00:01:51.017 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.017 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.017 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.017 Program python3 found: YES (/usr/bin/python3) 00:01:51.017 Program cat found: YES (/usr/bin/cat) 00:01:51.017 Compiler for C supports arguments -march=native: YES 00:01:51.017 Checking for size of "void *" : 8 00:01:51.017 Checking for size of "void *" : 8 (cached) 00:01:51.017 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:51.017 Library m found: YES 00:01:51.017 Library numa found: YES 00:01:51.017 Has header "numaif.h" : YES 00:01:51.017 Library fdt found: NO 00:01:51.017 Library execinfo found: NO 00:01:51.017 Has header "execinfo.h" : YES 00:01:51.017 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.017 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.017 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.017 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.017 Run-time dependency openssl found: YES 3.0.9 00:01:51.017 Run-time dependency libpcap found: YES 1.10.4 00:01:51.017 Has header "pcap.h" with dependency libpcap: YES 00:01:51.017 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.017 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.017 Compiler for C supports arguments -Wformat: YES 00:01:51.017 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.017 Compiler for C supports arguments -Wformat-security: NO 00:01:51.017 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.017 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.017 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.017 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.017 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.017 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.017 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.017 Compiler for C supports arguments -Wundef: YES 00:01:51.017 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.017 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.017 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:51.017 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.017 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.017 Program objdump found: YES (/usr/bin/objdump) 00:01:51.017 Compiler for C supports arguments -mavx512f: YES 00:01:51.017 Checking if "AVX512 checking" compiles: YES 00:01:51.017 Fetching value of define "__SSE4_2__" : 1 00:01:51.017 Fetching value of define "__AES__" : 1 00:01:51.017 Fetching value of define "__AVX__" : 1 00:01:51.017 Fetching value of define "__AVX2__" : 1 00:01:51.017 Fetching value of define "__AVX512BW__" : 1 00:01:51.017 Fetching value of define "__AVX512CD__" : 1 00:01:51.017 Fetching value of define "__AVX512DQ__" : 1 00:01:51.017 Fetching value of define "__AVX512F__" : 1 00:01:51.017 Fetching value of define "__AVX512VL__" : 1 00:01:51.017 Fetching value of define "__PCLMUL__" : 1 00:01:51.017 Fetching value of define "__RDRND__" : 1 00:01:51.017 Fetching value of define "__RDSEED__" : 1 00:01:51.017 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:51.017 Fetching value of define "__znver1__" : (undefined) 00:01:51.017 Fetching value of define "__znver2__" : (undefined) 00:01:51.017 Fetching value of define "__znver3__" : (undefined) 00:01:51.017 Fetching value of define "__znver4__" : (undefined) 00:01:51.017 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.017 Message: lib/log: Defining dependency "log" 00:01:51.017 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.017 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.017 Checking for function "getentropy" : NO 00:01:51.017 Message: lib/eal: Defining dependency "eal" 00:01:51.017 Message: lib/ring: Defining dependency "ring" 00:01:51.017 Message: lib/rcu: Defining dependency "rcu" 00:01:51.017 Message: lib/mempool: Defining dependency "mempool" 00:01:51.017 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.017 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.017 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.017 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.017 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.017 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.017 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:51.017 Compiler for C supports arguments -mpclmul: YES 00:01:51.017 Compiler for C supports arguments -maes: YES 00:01:51.017 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.017 Compiler for C supports arguments -mavx512bw: YES 00:01:51.017 Compiler for C supports arguments -mavx512dq: YES 00:01:51.017 Compiler for C supports arguments -mavx512vl: YES 00:01:51.017 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.017 Compiler for C supports arguments -mavx2: YES 00:01:51.017 Compiler for C supports arguments -mavx: YES 00:01:51.017 Message: lib/net: Defining dependency "net" 00:01:51.017 Message: lib/meter: Defining dependency "meter" 00:01:51.017 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.017 Message: lib/pci: Defining dependency "pci" 00:01:51.017 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.017 Message: lib/hash: Defining dependency "hash" 00:01:51.017 Message: lib/timer: Defining dependency "timer" 00:01:51.017 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.017 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.017 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.017 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.017 Message: lib/power: Defining dependency "power" 00:01:51.017 Message: lib/reorder: Defining dependency "reorder" 00:01:51.017 Message: lib/security: Defining dependency "security" 00:01:51.017 Has header "linux/userfaultfd.h" : YES 00:01:51.017 Has header "linux/vduse.h" : YES 00:01:51.017 Message: lib/vhost: Defining dependency "vhost" 00:01:51.017 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.017 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.017 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.017 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.017 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.017 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.017 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.017 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.017 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.017 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.017 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.017 Configuring doxy-api-html.conf using configuration 00:01:51.017 Configuring doxy-api-man.conf using configuration 00:01:51.017 Program mandb found: YES (/usr/bin/mandb) 00:01:51.017 Program sphinx-build found: NO 00:01:51.017 Configuring rte_build_config.h using configuration 00:01:51.017 Message: 00:01:51.017 ================= 00:01:51.018 Applications Enabled 00:01:51.018 ================= 00:01:51.018 00:01:51.018 apps: 00:01:51.018 00:01:51.018 00:01:51.018 Message: 00:01:51.018 ================= 00:01:51.018 Libraries Enabled 00:01:51.018 ================= 00:01:51.018 00:01:51.018 libs: 00:01:51.018 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.018 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.018 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.018 00:01:51.018 Message: 00:01:51.018 =============== 00:01:51.018 Drivers Enabled 00:01:51.018 =============== 00:01:51.018 00:01:51.018 common: 00:01:51.018 00:01:51.018 bus: 00:01:51.018 pci, vdev, 00:01:51.018 mempool: 00:01:51.018 ring, 00:01:51.018 dma: 00:01:51.018 00:01:51.018 net: 00:01:51.018 00:01:51.018 crypto: 00:01:51.018 00:01:51.018 compress: 00:01:51.018 00:01:51.018 vdpa: 00:01:51.018 00:01:51.018 00:01:51.018 Message: 00:01:51.018 ================= 00:01:51.018 Content Skipped 00:01:51.018 ================= 00:01:51.018 00:01:51.018 apps: 00:01:51.018 dumpcap: explicitly disabled via build config 00:01:51.018 graph: explicitly disabled via build config 00:01:51.018 pdump: explicitly disabled via build config 00:01:51.018 proc-info: explicitly disabled via build config 00:01:51.018 test-acl: explicitly disabled via build config 00:01:51.018 test-bbdev: explicitly disabled via build config 00:01:51.018 test-cmdline: explicitly disabled via build config 00:01:51.018 test-compress-perf: explicitly disabled via build config 00:01:51.018 test-crypto-perf: explicitly disabled via build config 00:01:51.018 test-dma-perf: explicitly disabled via build config 00:01:51.018 test-eventdev: explicitly disabled via build config 00:01:51.018 test-fib: explicitly disabled via build config 00:01:51.018 test-flow-perf: explicitly disabled via build config 00:01:51.018 test-gpudev: explicitly disabled via build config 
00:01:51.018 test-mldev: explicitly disabled via build config 00:01:51.018 test-pipeline: explicitly disabled via build config 00:01:51.018 test-pmd: explicitly disabled via build config 00:01:51.018 test-regex: explicitly disabled via build config 00:01:51.018 test-sad: explicitly disabled via build config 00:01:51.018 test-security-perf: explicitly disabled via build config 00:01:51.018 00:01:51.018 libs: 00:01:51.018 argparse: explicitly disabled via build config 00:01:51.018 metrics: explicitly disabled via build config 00:01:51.018 acl: explicitly disabled via build config 00:01:51.018 bbdev: explicitly disabled via build config 00:01:51.018 bitratestats: explicitly disabled via build config 00:01:51.018 bpf: explicitly disabled via build config 00:01:51.018 cfgfile: explicitly disabled via build config 00:01:51.018 distributor: explicitly disabled via build config 00:01:51.018 efd: explicitly disabled via build config 00:01:51.018 eventdev: explicitly disabled via build config 00:01:51.018 dispatcher: explicitly disabled via build config 00:01:51.018 gpudev: explicitly disabled via build config 00:01:51.018 gro: explicitly disabled via build config 00:01:51.018 gso: explicitly disabled via build config 00:01:51.018 ip_frag: explicitly disabled via build config 00:01:51.018 jobstats: explicitly disabled via build config 00:01:51.018 latencystats: explicitly disabled via build config 00:01:51.018 lpm: explicitly disabled via build config 00:01:51.018 member: explicitly disabled via build config 00:01:51.018 pcapng: explicitly disabled via build config 00:01:51.018 rawdev: explicitly disabled via build config 00:01:51.018 regexdev: explicitly disabled via build config 00:01:51.018 mldev: explicitly disabled via build config 00:01:51.018 rib: explicitly disabled via build config 00:01:51.018 sched: explicitly disabled via build config 00:01:51.018 stack: explicitly disabled via build config 00:01:51.018 ipsec: explicitly disabled via build config 00:01:51.018 pdcp: explicitly disabled via build config 00:01:51.018 fib: explicitly disabled via build config 00:01:51.018 port: explicitly disabled via build config 00:01:51.018 pdump: explicitly disabled via build config 00:01:51.018 table: explicitly disabled via build config 00:01:51.018 pipeline: explicitly disabled via build config 00:01:51.018 graph: explicitly disabled via build config 00:01:51.018 node: explicitly disabled via build config 00:01:51.018 00:01:51.018 drivers: 00:01:51.018 common/cpt: not in enabled drivers build config 00:01:51.018 common/dpaax: not in enabled drivers build config 00:01:51.018 common/iavf: not in enabled drivers build config 00:01:51.018 common/idpf: not in enabled drivers build config 00:01:51.018 common/ionic: not in enabled drivers build config 00:01:51.018 common/mvep: not in enabled drivers build config 00:01:51.018 common/octeontx: not in enabled drivers build config 00:01:51.018 bus/auxiliary: not in enabled drivers build config 00:01:51.018 bus/cdx: not in enabled drivers build config 00:01:51.018 bus/dpaa: not in enabled drivers build config 00:01:51.018 bus/fslmc: not in enabled drivers build config 00:01:51.018 bus/ifpga: not in enabled drivers build config 00:01:51.018 bus/platform: not in enabled drivers build config 00:01:51.018 bus/uacce: not in enabled drivers build config 00:01:51.018 bus/vmbus: not in enabled drivers build config 00:01:51.018 common/cnxk: not in enabled drivers build config 00:01:51.018 common/mlx5: not in enabled drivers build config 00:01:51.018 common/nfp: not in 
enabled drivers build config 00:01:51.018 common/nitrox: not in enabled drivers build config 00:01:51.018 common/qat: not in enabled drivers build config 00:01:51.018 common/sfc_efx: not in enabled drivers build config 00:01:51.018 mempool/bucket: not in enabled drivers build config 00:01:51.018 mempool/cnxk: not in enabled drivers build config 00:01:51.018 mempool/dpaa: not in enabled drivers build config 00:01:51.018 mempool/dpaa2: not in enabled drivers build config 00:01:51.018 mempool/octeontx: not in enabled drivers build config 00:01:51.018 mempool/stack: not in enabled drivers build config 00:01:51.018 dma/cnxk: not in enabled drivers build config 00:01:51.018 dma/dpaa: not in enabled drivers build config 00:01:51.018 dma/dpaa2: not in enabled drivers build config 00:01:51.018 dma/hisilicon: not in enabled drivers build config 00:01:51.018 dma/idxd: not in enabled drivers build config 00:01:51.018 dma/ioat: not in enabled drivers build config 00:01:51.018 dma/skeleton: not in enabled drivers build config 00:01:51.018 net/af_packet: not in enabled drivers build config 00:01:51.018 net/af_xdp: not in enabled drivers build config 00:01:51.018 net/ark: not in enabled drivers build config 00:01:51.018 net/atlantic: not in enabled drivers build config 00:01:51.018 net/avp: not in enabled drivers build config 00:01:51.018 net/axgbe: not in enabled drivers build config 00:01:51.018 net/bnx2x: not in enabled drivers build config 00:01:51.018 net/bnxt: not in enabled drivers build config 00:01:51.018 net/bonding: not in enabled drivers build config 00:01:51.018 net/cnxk: not in enabled drivers build config 00:01:51.018 net/cpfl: not in enabled drivers build config 00:01:51.018 net/cxgbe: not in enabled drivers build config 00:01:51.018 net/dpaa: not in enabled drivers build config 00:01:51.018 net/dpaa2: not in enabled drivers build config 00:01:51.018 net/e1000: not in enabled drivers build config 00:01:51.018 net/ena: not in enabled drivers build config 00:01:51.018 net/enetc: not in enabled drivers build config 00:01:51.018 net/enetfec: not in enabled drivers build config 00:01:51.018 net/enic: not in enabled drivers build config 00:01:51.018 net/failsafe: not in enabled drivers build config 00:01:51.018 net/fm10k: not in enabled drivers build config 00:01:51.018 net/gve: not in enabled drivers build config 00:01:51.018 net/hinic: not in enabled drivers build config 00:01:51.018 net/hns3: not in enabled drivers build config 00:01:51.018 net/i40e: not in enabled drivers build config 00:01:51.018 net/iavf: not in enabled drivers build config 00:01:51.018 net/ice: not in enabled drivers build config 00:01:51.018 net/idpf: not in enabled drivers build config 00:01:51.018 net/igc: not in enabled drivers build config 00:01:51.018 net/ionic: not in enabled drivers build config 00:01:51.018 net/ipn3ke: not in enabled drivers build config 00:01:51.018 net/ixgbe: not in enabled drivers build config 00:01:51.018 net/mana: not in enabled drivers build config 00:01:51.018 net/memif: not in enabled drivers build config 00:01:51.018 net/mlx4: not in enabled drivers build config 00:01:51.018 net/mlx5: not in enabled drivers build config 00:01:51.018 net/mvneta: not in enabled drivers build config 00:01:51.018 net/mvpp2: not in enabled drivers build config 00:01:51.018 net/netvsc: not in enabled drivers build config 00:01:51.018 net/nfb: not in enabled drivers build config 00:01:51.018 net/nfp: not in enabled drivers build config 00:01:51.018 net/ngbe: not in enabled drivers build config 00:01:51.018 
net/null: not in enabled drivers build config 00:01:51.018 net/octeontx: not in enabled drivers build config 00:01:51.018 net/octeon_ep: not in enabled drivers build config 00:01:51.018 net/pcap: not in enabled drivers build config 00:01:51.018 net/pfe: not in enabled drivers build config 00:01:51.018 net/qede: not in enabled drivers build config 00:01:51.018 net/ring: not in enabled drivers build config 00:01:51.018 net/sfc: not in enabled drivers build config 00:01:51.018 net/softnic: not in enabled drivers build config 00:01:51.018 net/tap: not in enabled drivers build config 00:01:51.018 net/thunderx: not in enabled drivers build config 00:01:51.018 net/txgbe: not in enabled drivers build config 00:01:51.018 net/vdev_netvsc: not in enabled drivers build config 00:01:51.018 net/vhost: not in enabled drivers build config 00:01:51.018 net/virtio: not in enabled drivers build config 00:01:51.018 net/vmxnet3: not in enabled drivers build config 00:01:51.018 raw/*: missing internal dependency, "rawdev" 00:01:51.018 crypto/armv8: not in enabled drivers build config 00:01:51.018 crypto/bcmfs: not in enabled drivers build config 00:01:51.018 crypto/caam_jr: not in enabled drivers build config 00:01:51.018 crypto/ccp: not in enabled drivers build config 00:01:51.018 crypto/cnxk: not in enabled drivers build config 00:01:51.018 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.018 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.018 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.018 crypto/mlx5: not in enabled drivers build config 00:01:51.018 crypto/mvsam: not in enabled drivers build config 00:01:51.018 crypto/nitrox: not in enabled drivers build config 00:01:51.018 crypto/null: not in enabled drivers build config 00:01:51.018 crypto/octeontx: not in enabled drivers build config 00:01:51.018 crypto/openssl: not in enabled drivers build config 00:01:51.018 crypto/scheduler: not in enabled drivers build config 00:01:51.018 crypto/uadk: not in enabled drivers build config 00:01:51.018 crypto/virtio: not in enabled drivers build config 00:01:51.018 compress/isal: not in enabled drivers build config 00:01:51.018 compress/mlx5: not in enabled drivers build config 00:01:51.018 compress/nitrox: not in enabled drivers build config 00:01:51.019 compress/octeontx: not in enabled drivers build config 00:01:51.019 compress/zlib: not in enabled drivers build config 00:01:51.019 regex/*: missing internal dependency, "regexdev" 00:01:51.019 ml/*: missing internal dependency, "mldev" 00:01:51.019 vdpa/ifc: not in enabled drivers build config 00:01:51.019 vdpa/mlx5: not in enabled drivers build config 00:01:51.019 vdpa/nfp: not in enabled drivers build config 00:01:51.019 vdpa/sfc: not in enabled drivers build config 00:01:51.019 event/*: missing internal dependency, "eventdev" 00:01:51.019 baseband/*: missing internal dependency, "bbdev" 00:01:51.019 gpu/*: missing internal dependency, "gpudev" 00:01:51.019 00:01:51.019 00:01:51.019 Build targets in project: 85 00:01:51.019 00:01:51.019 DPDK 24.03.0 00:01:51.019 00:01:51.019 User defined options 00:01:51.019 buildtype : debug 00:01:51.019 default_library : shared 00:01:51.019 libdir : lib 00:01:51.019 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.019 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.019 c_link_args : 00:01:51.019 cpu_instruction_set: native 00:01:51.019 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:51.019 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:51.019 enable_docs : false
00:01:51.019 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:51.019 enable_kmods : false
00:01:51.019 max_lcores : 128
00:01:51.019 tests : false
00:01:51.019
00:01:51.019 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:51.282 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:51.282 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:51.282 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:51.283 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:51.283 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:51.283 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:51.283 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:51.283 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:51.283 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:51.283 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:51.544 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:51.544 [11/268] Linking static target lib/librte_kvargs.a
00:01:51.544 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:51.544 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:51.544 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:51.544 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:51.544 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:51.544 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:51.544 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:51.544 [19/268] Linking static target lib/librte_log.a
00:01:51.544 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:51.544 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:51.544 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:51.544 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:51.544 [24/268] Linking static target lib/librte_pci.a
00:01:51.807 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:51.807 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:51.807 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:51.807 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:51.807 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:51.807 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:51.807 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:51.807 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:51.807 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:51.807 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.807 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:51.807 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:51.807 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:51.807 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:51.807 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:51.807 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:51.807 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:51.807 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:51.807 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:51.807 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:51.807 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:51.807 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:51.807 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:51.807 [48/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:51.807 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:51.807 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:51.807 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:51.807 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:51.807 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:51.808 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:51.808 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:51.808 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:51.808 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:51.808 [58/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:51.808 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:51.808 [60/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:51.808 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:51.808 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:51.808 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:51.808 [64/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:51.808 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:51.808 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:51.808 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:51.808 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:51.808 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:51.808 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:51.808 [71/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:51.808 [72/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:51.808 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:51.808 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:51.808 [75/268] Linking static target lib/librte_meter.a
00:01:51.808 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:51.808 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:51.808 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:51.808 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:51.808 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:51.808 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:52.067 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:52.067 [83/268] Linking static target lib/librte_ring.a
00:01:52.067 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:52.067 [85/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:52.067 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:52.067 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:52.067 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:52.067 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:52.067 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:52.067 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.067 [92/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:52.067 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:52.067 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:52.067 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:52.067 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:52.067 [97/268] Linking static target lib/librte_telemetry.a
00:01:52.067 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:52.067 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:52.067 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:52.067 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:52.067 [102/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:52.067 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:52.067 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:52.067 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:52.067 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:52.067 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:52.067 [108/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:52.067 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:52.067 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.067 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:52.067 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:52.067 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:52.067 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:52.067 [115/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:52.067 [116/268] Linking static target lib/librte_net.a
00:01:52.067 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:52.067 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:52.067 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:52.067 [120/268] Linking static target lib/librte_mempool.a
00:01:52.067 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:52.067 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:52.067 [123/268] Linking static target lib/librte_eal.a
00:01:52.067 [124/268] Linking static target lib/librte_rcu.a
00:01:52.067 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:52.067 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:52.067 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:52.067 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:52.067 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:52.067 [130/268] Linking static target lib/librte_cmdline.a
00:01:52.067 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:52.067 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:52.067 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:52.067 [134/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:52.067 [135/268] Linking static target lib/librte_mbuf.a
00:01:52.326 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:52.326 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [140/268] Linking target lib/librte_log.so.24.1
00:01:52.326 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:52.326 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:52.326 [143/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:52.326 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:52.326 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:52.326 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:52.326 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:52.326 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:52.326 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:52.326 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:52.326 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:52.326 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:52.326 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:52.326 [155/268] Linking static target lib/librte_dmadev.a
00:01:52.326 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:52.326 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:52.326 [158/268] Linking static target lib/librte_timer.a
00:01:52.326 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:52.326 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:52.326 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:52.326 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:52.326 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:52.326 [164/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.326 [166/268] Linking target lib/librte_kvargs.so.24.1
00:01:52.326 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:52.326 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:52.326 [169/268] Linking target lib/librte_telemetry.so.24.1
00:01:52.326 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:52.326 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:52.326 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:52.326 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:52.326 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:52.586 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:52.586 [176/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:52.586 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:52.586 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:52.586 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:52.586 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:52.586 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:52.586 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:52.586 [183/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:52.586 [184/268] Linking static target lib/librte_reorder.a
00:01:52.586 [185/268] Linking static target lib/librte_compressdev.a
00:01:52.586 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:52.586 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:52.586 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:52.586 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:52.586 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:52.586 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:52.586 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:52.586 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:52.586 [194/268] Linking static target lib/librte_hash.a
00:01:52.586 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:52.586 [196/268] Linking static target lib/librte_power.a
00:01:52.586 [197/268] Linking static target lib/librte_security.a
00:01:52.586 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:52.586 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:52.586 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:52.586 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:52.586 [202/268] Linking static target drivers/librte_bus_vdev.a
00:01:52.586 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:52.586 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:52.586 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:52.586 [206/268] Linking static target drivers/librte_mempool_ring.a
00:01:52.845 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:52.845 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.845 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:52.845 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:52.845 [211/268] Linking static target drivers/librte_bus_pci.a
00:01:52.845 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.845 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.845 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:52.845 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:52.845 [216/268] Linking static target lib/librte_ethdev.a
00:01:52.845 [217/268] Linking static target lib/librte_cryptodev.a
00:01:53.103 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.103 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.103 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.103 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.103 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.103 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.363 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:53.363 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.363 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.622 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.189 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:54.448 [229/268] Linking static target lib/librte_vhost.a
00:01:54.707 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.613 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.890 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.458 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.458 [234/268] Linking target lib/librte_eal.so.24.1
00:02:02.458 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:02.716 [236/268] Linking target lib/librte_timer.so.24.1
00:02:02.716 [237/268] Linking target lib/librte_ring.so.24.1
00:02:02.716 [238/268] Linking target lib/librte_meter.so.24.1
00:02:02.716 [239/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:02.716 [240/268] Linking target lib/librte_dmadev.so.24.1
00:02:02.716 [241/268] Linking target lib/librte_pci.so.24.1
00:02:02.716 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:02.716 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:02.716 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:02.716 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:02.716 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:02.716 [247/268] Linking target lib/librte_rcu.so.24.1
00:02:02.716 [248/268] Linking target lib/librte_mempool.so.24.1
00:02:02.716 [249/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:02.975 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:02.975 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:02.975 [252/268] Linking target lib/librte_mbuf.so.24.1
00:02:02.975 [253/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:02.975 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:03.234 [255/268] Linking target lib/librte_net.so.24.1
00:02:03.234 [256/268] Linking target lib/librte_reorder.so.24.1
00:02:03.234 [257/268] Linking target lib/librte_compressdev.so.24.1
00:02:03.234 [258/268] Linking target lib/librte_cryptodev.so.24.1
00:02:03.234 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:03.234 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:03.234 [261/268] Linking target lib/librte_cmdline.so.24.1
00:02:03.234 [262/268] Linking target lib/librte_hash.so.24.1
00:02:03.234 [263/268] Linking target lib/librte_security.so.24.1
00:02:03.234 [264/268] Linking target lib/librte_ethdev.so.24.1
00:02:03.493 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:03.493 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:03.493 [267/268] Linking target lib/librte_power.so.24.1
00:02:03.493 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:03.493 INFO: autodetecting backend as ninja
00:02:03.493 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:02:04.431 CC lib/ut/ut.o
00:02:04.431 CC lib/ut_mock/mock.o
00:02:04.431 CC lib/log/log.o
00:02:04.431 CC lib/log/log_flags.o
00:02:04.431 CC lib/log/log_deprecated.o
00:02:04.690 LIB libspdk_ut.a
00:02:04.690 LIB libspdk_ut_mock.a
00:02:04.690 LIB libspdk_log.a
00:02:04.690 SO libspdk_ut.so.2.0
00:02:04.690 SO libspdk_ut_mock.so.6.0
00:02:04.690 SO libspdk_log.so.7.0
00:02:04.690 SYMLINK libspdk_ut.so
00:02:04.690 SYMLINK libspdk_ut_mock.so
00:02:04.690 SYMLINK libspdk_log.so
00:02:05.304 CXX lib/trace_parser/trace.o
00:02:05.304 CC lib/ioat/ioat.o
00:02:05.304 CC lib/util/base64.o
00:02:05.304 CC lib/dma/dma.o
00:02:05.304 CC lib/util/bit_array.o
00:02:05.304 CC lib/util/cpuset.o
00:02:05.304 CC lib/util/crc16.o
00:02:05.304 CC lib/util/crc32.o
00:02:05.304 CC lib/util/crc32c.o
00:02:05.304 CC lib/util/crc32_ieee.o
00:02:05.304 CC lib/util/crc64.o
00:02:05.304 CC lib/util/dif.o
00:02:05.304 CC lib/util/fd.o
00:02:05.304 CC lib/util/file.o
00:02:05.304 CC lib/util/hexlify.o
00:02:05.304 CC lib/util/iov.o
00:02:05.304 CC lib/util/math.o
00:02:05.304 CC lib/util/pipe.o
00:02:05.304 CC lib/util/strerror_tls.o
00:02:05.304 CC lib/util/string.o
00:02:05.304 CC lib/util/uuid.o
00:02:05.304 CC lib/util/fd_group.o
00:02:05.304 CC lib/util/xor.o
00:02:05.304 CC lib/util/zipf.o
00:02:05.304 CC lib/vfio_user/host/vfio_user_pci.o
00:02:05.304 CC lib/vfio_user/host/vfio_user.o
00:02:05.304 LIB libspdk_dma.a
00:02:05.304 SO libspdk_dma.so.4.0
00:02:05.304 LIB libspdk_ioat.a
00:02:05.304 SYMLINK libspdk_dma.so
00:02:05.304 SO libspdk_ioat.so.7.0
00:02:05.577 SYMLINK libspdk_ioat.so
00:02:05.577 LIB libspdk_vfio_user.a
00:02:05.577 SO libspdk_vfio_user.so.5.0
00:02:05.577 LIB libspdk_util.a
00:02:05.577 SYMLINK libspdk_vfio_user.so
00:02:05.577 SO libspdk_util.so.9.1
00:02:05.835 SYMLINK libspdk_util.so
00:02:05.835 LIB libspdk_trace_parser.a
00:02:05.835 SO libspdk_trace_parser.so.5.0
00:02:05.835 SYMLINK libspdk_trace_parser.so
00:02:06.094 CC lib/rdma_utils/rdma_utils.o
00:02:06.094 CC lib/conf/conf.o
00:02:06.094 CC lib/vmd/vmd.o
00:02:06.094 CC lib/vmd/led.o
00:02:06.094 CC lib/idxd/idxd.o
00:02:06.094 CC lib/json/json_parse.o
00:02:06.094 CC lib/idxd/idxd_user.o
00:02:06.094 CC lib/rdma_provider/common.o
00:02:06.094 CC lib/json/json_util.o
00:02:06.094 CC lib/idxd/idxd_kernel.o
00:02:06.094 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:06.094 CC lib/env_dpdk/env.o
00:02:06.094 CC lib/json/json_write.o
00:02:06.094 CC lib/env_dpdk/memory.o
00:02:06.094 CC lib/env_dpdk/pci.o
00:02:06.094 CC lib/env_dpdk/init.o
00:02:06.094 CC lib/env_dpdk/threads.o
00:02:06.094 CC lib/env_dpdk/pci_ioat.o
00:02:06.094 CC lib/env_dpdk/pci_virtio.o
00:02:06.094 CC lib/env_dpdk/pci_vmd.o
00:02:06.094 CC lib/env_dpdk/pci_idxd.o
00:02:06.094 CC lib/env_dpdk/pci_event.o
00:02:06.094 CC lib/env_dpdk/sigbus_handler.o
00:02:06.094 CC lib/env_dpdk/pci_dpdk.o
00:02:06.094 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:06.094 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:06.353 LIB libspdk_conf.a
00:02:06.353 LIB libspdk_rdma_provider.a
00:02:06.353 SO libspdk_conf.so.6.0
00:02:06.353 SO libspdk_rdma_provider.so.6.0
00:02:06.353 LIB libspdk_rdma_utils.a
00:02:06.353 LIB libspdk_json.a
00:02:06.353 SO libspdk_rdma_utils.so.1.0
00:02:06.353 SYMLINK libspdk_conf.so
00:02:06.353 SYMLINK libspdk_rdma_provider.so
00:02:06.353 SO libspdk_json.so.6.0
00:02:06.353 SYMLINK libspdk_rdma_utils.so
00:02:06.353 SYMLINK libspdk_json.so
00:02:06.613 LIB libspdk_idxd.a
00:02:06.613 SO libspdk_idxd.so.12.0
00:02:06.613 LIB libspdk_vmd.a
00:02:06.613 SYMLINK libspdk_idxd.so
00:02:06.613 SO libspdk_vmd.so.6.0
00:02:06.613 SYMLINK libspdk_vmd.so
00:02:06.613 CC lib/jsonrpc/jsonrpc_server.o
00:02:06.613 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:06.613 CC lib/jsonrpc/jsonrpc_client.o
00:02:06.613 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:06.873 LIB libspdk_jsonrpc.a
00:02:06.873 SO libspdk_jsonrpc.so.6.0
00:02:07.133 SYMLINK libspdk_jsonrpc.so
00:02:07.133 LIB libspdk_env_dpdk.a
00:02:07.133 SO libspdk_env_dpdk.so.14.1
00:02:07.392 SYMLINK libspdk_env_dpdk.so
00:02:07.392 CC lib/rpc/rpc.o
00:02:07.392 LIB libspdk_rpc.a
00:02:07.651 SO libspdk_rpc.so.6.0
00:02:07.651 SYMLINK libspdk_rpc.so
00:02:07.909 CC lib/notify/notify.o
00:02:07.909 CC lib/notify/notify_rpc.o
00:02:07.909 CC lib/trace/trace.o
00:02:07.909 CC lib/keyring/keyring.o
00:02:07.909 CC lib/trace/trace_flags.o
00:02:07.909 CC lib/keyring/keyring_rpc.o
00:02:07.909 CC lib/trace/trace_rpc.o
00:02:08.168 LIB libspdk_notify.a
00:02:08.168 SO libspdk_notify.so.6.0
00:02:08.168 LIB libspdk_keyring.a
00:02:08.168 LIB libspdk_trace.a
00:02:08.168 SO libspdk_keyring.so.1.0
00:02:08.168 SYMLINK libspdk_notify.so
00:02:08.168 SO libspdk_trace.so.10.0
00:02:08.168 SYMLINK libspdk_keyring.so
00:02:08.168 SYMLINK libspdk_trace.so
00:02:08.427 CC lib/thread/thread.o
00:02:08.427 CC lib/sock/sock.o
00:02:08.427 CC lib/thread/iobuf.o
00:02:08.427 CC lib/sock/sock_rpc.o
00:02:08.993 LIB libspdk_sock.a
00:02:08.993 SO libspdk_sock.so.10.0
00:02:08.993 SYMLINK libspdk_sock.so
00:02:09.251 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:09.251 CC lib/nvme/nvme_ctrlr.o
00:02:09.251 CC lib/nvme/nvme_fabric.o
00:02:09.251 CC lib/nvme/nvme_ns_cmd.o
00:02:09.251 CC lib/nvme/nvme_ns.o
00:02:09.251 CC lib/nvme/nvme_pcie_common.o
00:02:09.251 CC lib/nvme/nvme_pcie.o
00:02:09.251 CC lib/nvme/nvme_qpair.o
00:02:09.251 CC lib/nvme/nvme.o
00:02:09.251 CC lib/nvme/nvme_quirks.o
00:02:09.251 CC lib/nvme/nvme_transport.o
00:02:09.251 CC lib/nvme/nvme_discovery.o
00:02:09.251 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:09.251 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:09.251 CC lib/nvme/nvme_tcp.o
00:02:09.251 CC lib/nvme/nvme_opal.o
00:02:09.251 CC lib/nvme/nvme_io_msg.o
00:02:09.251 CC lib/nvme/nvme_poll_group.o
00:02:09.251 CC lib/nvme/nvme_zns.o
00:02:09.251 CC lib/nvme/nvme_stubs.o
00:02:09.251 CC lib/nvme/nvme_cuse.o
00:02:09.251 CC lib/nvme/nvme_auth.o
00:02:09.251 CC lib/nvme/nvme_vfio_user.o
00:02:09.251 CC lib/nvme/nvme_rdma.o
00:02:09.509 LIB libspdk_thread.a
00:02:09.509 SO libspdk_thread.so.10.1
00:02:09.766 SYMLINK libspdk_thread.so
00:02:10.024 CC lib/accel/accel.o
00:02:10.024 CC lib/accel/accel_rpc.o
00:02:10.024 CC lib/accel/accel_sw.o
00:02:10.024 CC lib/virtio/virtio.o
00:02:10.024 CC lib/virtio/virtio_vhost_user.o
00:02:10.024 CC lib/virtio/virtio_vfio_user.o
00:02:10.024 CC lib/virtio/virtio_pci.o
00:02:10.024 CC lib/vfu_tgt/tgt_endpoint.o
00:02:10.024 CC lib/init/json_config.o
00:02:10.024 CC lib/vfu_tgt/tgt_rpc.o
00:02:10.024 CC lib/blob/blobstore.o
00:02:10.024 CC lib/init/subsystem.o
00:02:10.024 CC lib/init/subsystem_rpc.o
00:02:10.024 CC lib/blob/request.o
00:02:10.024 CC lib/init/rpc.o
00:02:10.024 CC lib/blob/zeroes.o
00:02:10.024 CC lib/blob/blob_bs_dev.o
00:02:10.283 LIB libspdk_init.a
00:02:10.283 LIB libspdk_vfu_tgt.a
00:02:10.283 SO libspdk_init.so.5.0
00:02:10.283 LIB libspdk_virtio.a
00:02:10.283 SO libspdk_vfu_tgt.so.3.0
00:02:10.283 SO libspdk_virtio.so.7.0
00:02:10.283 SYMLINK libspdk_init.so
00:02:10.283 SYMLINK libspdk_vfu_tgt.so
00:02:10.283 SYMLINK libspdk_virtio.so
00:02:10.542 CC lib/event/app.o
00:02:10.542 CC lib/event/reactor.o
00:02:10.542 CC lib/event/log_rpc.o
00:02:10.542 CC lib/event/app_rpc.o
00:02:10.542 CC lib/event/scheduler_static.o
00:02:10.542 LIB libspdk_accel.a
00:02:10.800 SO libspdk_accel.so.15.1
00:02:10.800 SYMLINK libspdk_accel.so
00:02:10.800 LIB libspdk_nvme.a
00:02:10.800 SO libspdk_nvme.so.13.1
00:02:10.800 LIB libspdk_event.a
00:02:11.058 SO libspdk_event.so.14.0
00:02:11.058 SYMLINK libspdk_event.so
00:02:11.058 CC lib/bdev/bdev.o
00:02:11.058 CC lib/bdev/bdev_rpc.o
00:02:11.058 CC lib/bdev/bdev_zone.o
00:02:11.058 CC lib/bdev/part.o
00:02:11.058 CC lib/bdev/scsi_nvme.o
00:02:11.058 SYMLINK libspdk_nvme.so
00:02:11.992 LIB libspdk_blob.a
00:02:11.992 SO libspdk_blob.so.11.0
00:02:12.251 SYMLINK libspdk_blob.so
00:02:12.510 CC lib/blobfs/blobfs.o
00:02:12.510 CC lib/blobfs/tree.o
00:02:12.510 CC lib/lvol/lvol.o
00:02:12.768 LIB libspdk_bdev.a
00:02:12.768 SO libspdk_bdev.so.15.1
00:02:13.025 SYMLINK libspdk_bdev.so
00:02:13.025 LIB libspdk_blobfs.a
00:02:13.025 SO libspdk_blobfs.so.10.0
00:02:13.025 LIB libspdk_lvol.a
00:02:13.025 SO libspdk_lvol.so.10.0
00:02:13.025 SYMLINK libspdk_blobfs.so
00:02:13.284 SYMLINK libspdk_lvol.so
00:02:13.284 CC lib/scsi/dev.o
00:02:13.284 CC lib/scsi/lun.o
00:02:13.284 CC lib/scsi/port.o
00:02:13.284 CC lib/scsi/scsi.o
00:02:13.284 CC lib/scsi/scsi_bdev.o
00:02:13.284 CC lib/scsi/scsi_pr.o
00:02:13.284 CC lib/scsi/scsi_rpc.o
00:02:13.284 CC lib/scsi/task.o
00:02:13.284 CC lib/nvmf/ctrlr.o
00:02:13.284 CC lib/ublk/ublk.o
00:02:13.284 CC lib/nvmf/ctrlr_discovery.o
00:02:13.284 CC lib/ublk/ublk_rpc.o
00:02:13.284 CC lib/nbd/nbd.o
00:02:13.284 CC lib/ftl/ftl_core.o
00:02:13.284 CC lib/nvmf/ctrlr_bdev.o
00:02:13.284 CC lib/nbd/nbd_rpc.o
00:02:13.284 CC lib/ftl/ftl_init.o
00:02:13.284 CC lib/nvmf/subsystem.o
00:02:13.284 CC lib/ftl/ftl_layout.o
00:02:13.284 CC lib/nvmf/nvmf.o
00:02:13.284 CC lib/ftl/ftl_debug.o
00:02:13.284 CC lib/nvmf/nvmf_rpc.o
00:02:13.284 CC lib/nvmf/transport.o
00:02:13.284 CC lib/ftl/ftl_io.o
00:02:13.284 CC lib/ftl/ftl_sb.o
00:02:13.284 CC lib/ftl/ftl_l2p.o
00:02:13.284 CC lib/nvmf/tcp.o
00:02:13.284 CC lib/nvmf/stubs.o
00:02:13.284 CC lib/ftl/ftl_l2p_flat.o
00:02:13.284 CC lib/ftl/ftl_nv_cache.o
00:02:13.284 CC lib/nvmf/mdns_server.o
00:02:13.284 CC lib/ftl/ftl_band.o
00:02:13.284 CC lib/nvmf/rdma.o
00:02:13.284 CC lib/nvmf/vfio_user.o
00:02:13.284 CC lib/ftl/ftl_band_ops.o
00:02:13.284 CC lib/ftl/ftl_writer.o
00:02:13.284 CC lib/nvmf/auth.o
00:02:13.284 CC lib/ftl/ftl_rq.o
00:02:13.284 CC lib/ftl/ftl_reloc.o
00:02:13.284 CC lib/ftl/ftl_l2p_cache.o
00:02:13.284 CC lib/ftl/ftl_p2l.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:13.284 CC lib/ftl/utils/ftl_md.o
00:02:13.284 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:13.284 CC lib/ftl/utils/ftl_conf.o
00:02:13.284 CC lib/ftl/utils/ftl_bitmap.o
00:02:13.284 CC lib/ftl/utils/ftl_mempool.o
00:02:13.284 CC lib/ftl/utils/ftl_property.o
00:02:13.284 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:13.284 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:13.284 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:13.284 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:13.284 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:13.284 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:13.284 CC lib/ftl/ftl_trace.o
00:02:13.284 CC lib/ftl/base/ftl_base_dev.o
00:02:13.284 CC lib/ftl/base/ftl_base_bdev.o
00:02:13.877 LIB libspdk_nbd.a
00:02:13.877 SO libspdk_nbd.so.7.0
00:02:13.877 SYMLINK libspdk_nbd.so
00:02:13.877 LIB libspdk_scsi.a
00:02:13.877 LIB libspdk_ublk.a
00:02:14.136 SO libspdk_ublk.so.3.0
00:02:14.136 SO libspdk_scsi.so.9.0
00:02:14.136 SYMLINK libspdk_ublk.so
00:02:14.136 SYMLINK libspdk_scsi.so
00:02:14.136 LIB libspdk_ftl.a
00:02:14.394 SO libspdk_ftl.so.9.0
00:02:14.394 CC lib/iscsi/conn.o
00:02:14.394 CC lib/vhost/vhost_rpc.o
00:02:14.394 CC lib/iscsi/init_grp.o
00:02:14.394 CC lib/vhost/vhost.o
00:02:14.394 CC lib/iscsi/iscsi.o
00:02:14.394 CC lib/iscsi/md5.o
00:02:14.394 CC lib/iscsi/param.o
00:02:14.394 CC lib/vhost/vhost_scsi.o
00:02:14.394 CC lib/vhost/vhost_blk.o
00:02:14.394 CC lib/iscsi/portal_grp.o
00:02:14.394 CC lib/iscsi/iscsi_subsystem.o
00:02:14.394 CC lib/iscsi/tgt_node.o
00:02:14.394 CC lib/vhost/rte_vhost_user.o
00:02:14.394 CC lib/iscsi/iscsi_rpc.o
00:02:14.394 CC lib/iscsi/task.o
00:02:14.653 SYMLINK libspdk_ftl.so
00:02:15.221 LIB libspdk_nvmf.a
00:02:15.221 SO libspdk_nvmf.so.18.1
00:02:15.221 LIB libspdk_vhost.a
00:02:15.221 SO libspdk_vhost.so.8.0
00:02:15.480 SYMLINK libspdk_nvmf.so
00:02:15.480 SYMLINK libspdk_vhost.so
00:02:15.480 LIB libspdk_iscsi.a
00:02:15.480 SO libspdk_iscsi.so.8.0
00:02:15.740 SYMLINK libspdk_iscsi.so
00:02:15.999 CC module/vfu_device/vfu_virtio.o
00:02:15.999 CC module/vfu_device/vfu_virtio_blk.o
00:02:15.999 CC module/vfu_device/vfu_virtio_scsi.o
00:02:15.999 CC module/vfu_device/vfu_virtio_rpc.o
00:02:15.999 CC module/env_dpdk/env_dpdk_rpc.o
00:02:16.257 CC module/accel/iaa/accel_iaa.o
00:02:16.257 CC module/accel/error/accel_error.o
00:02:16.257 CC module/accel/iaa/accel_iaa_rpc.o
00:02:16.257 CC module/accel/dsa/accel_dsa.o
00:02:16.257 CC module/accel/dsa/accel_dsa_rpc.o
00:02:16.257 CC module/accel/error/accel_error_rpc.o
00:02:16.257 CC module/accel/ioat/accel_ioat_rpc.o
00:02:16.257 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:16.257 CC module/accel/ioat/accel_ioat.o
00:02:16.257 LIB libspdk_env_dpdk_rpc.a
00:02:16.257 CC module/keyring/linux/keyring.o
00:02:16.257 CC module/keyring/file/keyring_rpc.o
00:02:16.257 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:16.257 CC module/keyring/file/keyring.o
00:02:16.257 CC module/keyring/linux/keyring_rpc.o
00:02:16.257 CC module/blob/bdev/blob_bdev.o
00:02:16.257 CC module/scheduler/gscheduler/gscheduler.o
00:02:16.257 CC module/sock/posix/posix.o
00:02:16.257 SO libspdk_env_dpdk_rpc.so.6.0
00:02:16.257 SYMLINK libspdk_env_dpdk_rpc.so
00:02:16.517 LIB libspdk_keyring_linux.a
00:02:16.517 LIB libspdk_keyring_file.a
00:02:16.517 LIB libspdk_accel_error.a
00:02:16.517 LIB libspdk_scheduler_gscheduler.a
00:02:16.517 LIB libspdk_scheduler_dpdk_governor.a
00:02:16.517 LIB libspdk_accel_ioat.a
00:02:16.517 SO libspdk_keyring_linux.so.1.0
00:02:16.517 LIB libspdk_accel_iaa.a
00:02:16.517 SO libspdk_keyring_file.so.1.0
00:02:16.517 SO libspdk_accel_error.so.2.0
00:02:16.517 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:16.517 SO libspdk_scheduler_gscheduler.so.4.0
00:02:16.517 LIB libspdk_scheduler_dynamic.a
00:02:16.517 SO libspdk_accel_ioat.so.6.0
00:02:16.517 SO libspdk_accel_iaa.so.3.0
00:02:16.517 SYMLINK libspdk_keyring_linux.so
00:02:16.517 LIB libspdk_accel_dsa.a
00:02:16.517 LIB libspdk_blob_bdev.a
00:02:16.517 SO libspdk_scheduler_dynamic.so.4.0
00:02:16.517 SYMLINK libspdk_keyring_file.so
00:02:16.517 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:16.517 SYMLINK libspdk_accel_error.so
00:02:16.517 SYMLINK libspdk_scheduler_gscheduler.so
00:02:16.517 SYMLINK libspdk_accel_ioat.so
00:02:16.517 SO libspdk_accel_dsa.so.5.0
00:02:16.517 SO libspdk_blob_bdev.so.11.0
00:02:16.517 SYMLINK libspdk_accel_iaa.so
00:02:16.517 SYMLINK libspdk_scheduler_dynamic.so
00:02:16.517 SYMLINK libspdk_accel_dsa.so
00:02:16.517 SYMLINK libspdk_blob_bdev.so
00:02:16.776 LIB libspdk_vfu_device.a
00:02:16.776 SO libspdk_vfu_device.so.3.0
00:02:16.776 SYMLINK libspdk_vfu_device.so
00:02:16.776 LIB libspdk_sock_posix.a
00:02:17.035 SO libspdk_sock_posix.so.6.0
00:02:17.035 SYMLINK libspdk_sock_posix.so
00:02:17.035 CC module/bdev/delay/vbdev_delay.o
00:02:17.035 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:17.035 CC module/bdev/gpt/gpt.o
00:02:17.035 CC module/bdev/gpt/vbdev_gpt.o
00:02:17.035 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:17.035 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:17.035 CC module/bdev/passthru/vbdev_passthru.o
00:02:17.035 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:17.035 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:17.035 CC module/bdev/error/vbdev_error.o
00:02:17.035 CC module/bdev/null/bdev_null.o
00:02:17.035 CC module/bdev/error/vbdev_error_rpc.o
00:02:17.035 CC module/bdev/null/bdev_null_rpc.o
00:02:17.035 CC module/bdev/malloc/bdev_malloc.o
00:02:17.035 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:17.035 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:17.035 CC module/bdev/nvme/bdev_nvme.o
00:02:17.035 CC module/bdev/nvme/nvme_rpc.o
00:02:17.035 CC module/bdev/nvme/bdev_mdns_client.o
00:02:17.035 CC module/bdev/nvme/vbdev_opal.o
00:02:17.035 CC module/bdev/iscsi/bdev_iscsi.o
00:02:17.035 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:17.035 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:17.035 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:17.035 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:17.035 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:17.035 CC module/blobfs/bdev/blobfs_bdev.o
00:02:17.035 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:17.035 CC module/bdev/ftl/bdev_ftl.o
00:02:17.035 CC module/bdev/aio/bdev_aio.o
00:02:17.035 CC module/bdev/aio/bdev_aio_rpc.o
00:02:17.035 CC module/bdev/lvol/vbdev_lvol.o
00:02:17.035 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:17.035 CC module/bdev/raid/bdev_raid_rpc.o
00:02:17.035 CC module/bdev/raid/bdev_raid.o
00:02:17.035 CC module/bdev/raid/bdev_raid_sb.o
00:02:17.035 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:17.035 CC module/bdev/raid/raid1.o
00:02:17.035 CC module/bdev/raid/raid0.o
00:02:17.035 CC module/bdev/raid/concat.o
00:02:17.035 CC module/bdev/split/vbdev_split.o
00:02:17.035 CC module/bdev/split/vbdev_split_rpc.o
00:02:17.294 LIB libspdk_blobfs_bdev.a
00:02:17.294 LIB libspdk_bdev_split.a
00:02:17.294 LIB libspdk_bdev_gpt.a
00:02:17.294 LIB libspdk_bdev_passthru.a
00:02:17.294 SO libspdk_blobfs_bdev.so.6.0
00:02:17.294 SO libspdk_bdev_gpt.so.6.0
00:02:17.294 SO libspdk_bdev_split.so.6.0
00:02:17.294 LIB libspdk_bdev_ftl.a
00:02:17.294 LIB libspdk_bdev_null.a
00:02:17.294 SO libspdk_bdev_passthru.so.6.0
00:02:17.294 LIB libspdk_bdev_error.a
00:02:17.294 SO libspdk_bdev_null.so.6.0
00:02:17.294 SYMLINK libspdk_blobfs_bdev.so
00:02:17.555 SO libspdk_bdev_ftl.so.6.0
00:02:17.555 LIB libspdk_bdev_delay.a
00:02:17.555 SO libspdk_bdev_error.so.6.0
00:02:17.555 LIB libspdk_bdev_aio.a
00:02:17.555 LIB libspdk_bdev_zone_block.a
00:02:17.555 SYMLINK libspdk_bdev_gpt.so
00:02:17.555 SYMLINK libspdk_bdev_split.so
00:02:17.555 LIB libspdk_bdev_iscsi.a
00:02:17.555 SYMLINK libspdk_bdev_passthru.so
00:02:17.555 SO libspdk_bdev_zone_block.so.6.0
00:02:17.555 SO libspdk_bdev_delay.so.6.0
00:02:17.555 SYMLINK libspdk_bdev_null.so
00:02:17.555 SO libspdk_bdev_aio.so.6.0
00:02:17.555 SO libspdk_bdev_iscsi.so.6.0
00:02:17.555 LIB libspdk_bdev_malloc.a
00:02:17.555 SYMLINK libspdk_bdev_ftl.so
00:02:17.555 SYMLINK libspdk_bdev_error.so
00:02:17.555 SO libspdk_bdev_malloc.so.6.0
00:02:17.555 SYMLINK libspdk_bdev_zone_block.so
00:02:17.555 SYMLINK libspdk_bdev_delay.so
00:02:17.555 SYMLINK libspdk_bdev_iscsi.so
00:02:17.555 SYMLINK libspdk_bdev_aio.so
00:02:17.555 LIB libspdk_bdev_virtio.a
00:02:17.555 SYMLINK libspdk_bdev_malloc.so
00:02:17.555 LIB libspdk_bdev_lvol.a
00:02:17.555 SO libspdk_bdev_lvol.so.6.0
00:02:17.555 SO libspdk_bdev_virtio.so.6.0
00:02:17.813 SYMLINK libspdk_bdev_lvol.so
00:02:17.813 SYMLINK libspdk_bdev_virtio.so
00:02:17.813 LIB libspdk_bdev_raid.a
00:02:17.813 SO libspdk_bdev_raid.so.6.0
00:02:18.072 SYMLINK libspdk_bdev_raid.so
00:02:18.642 LIB libspdk_bdev_nvme.a
00:02:18.642 SO libspdk_bdev_nvme.so.7.0
00:02:18.901 SYMLINK libspdk_bdev_nvme.so
00:02:19.468 CC module/event/subsystems/iobuf/iobuf.o
00:02:19.468 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:19.468 CC module/event/subsystems/vmd/vmd.o
00:02:19.468 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:19.468 CC module/event/subsystems/scheduler/scheduler.o
00:02:19.468 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:19.468 CC module/event/subsystems/sock/sock.o
00:02:19.468 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:19.468 CC module/event/subsystems/keyring/keyring.o
00:02:19.727 LIB libspdk_event_sock.a
00:02:19.727 LIB libspdk_event_vmd.a
00:02:19.727 LIB libspdk_event_keyring.a
00:02:19.727 LIB libspdk_event_vhost_blk.a
00:02:19.727 LIB libspdk_event_scheduler.a
00:02:19.727 LIB libspdk_event_iobuf.a
00:02:19.727 LIB libspdk_event_vfu_tgt.a
00:02:19.727 SO libspdk_event_sock.so.5.0
00:02:19.727 SO libspdk_event_vmd.so.6.0
00:02:19.727 SO libspdk_event_keyring.so.1.0
00:02:19.727 SO libspdk_event_vhost_blk.so.3.0
00:02:19.727 SO libspdk_event_vfu_tgt.so.3.0
00:02:19.727 SO libspdk_event_scheduler.so.4.0
00:02:19.727 SO libspdk_event_iobuf.so.3.0
00:02:19.727 SYMLINK libspdk_event_sock.so
00:02:19.727 SYMLINK libspdk_event_vmd.so
00:02:19.727 SYMLINK libspdk_event_keyring.so
00:02:19.727 SYMLINK libspdk_event_vhost_blk.so
00:02:19.727 SYMLINK libspdk_event_scheduler.so
00:02:19.727 SYMLINK libspdk_event_vfu_tgt.so
00:02:19.727 SYMLINK libspdk_event_iobuf.so
00:02:19.985 CC module/event/subsystems/accel/accel.o
00:02:20.245 LIB libspdk_event_accel.a
00:02:20.245 SO libspdk_event_accel.so.6.0
00:02:20.245 SYMLINK libspdk_event_accel.so
00:02:20.504 CC module/event/subsystems/bdev/bdev.o
00:02:20.763 LIB libspdk_event_bdev.a
00:02:20.763 SO libspdk_event_bdev.so.6.0
00:02:20.763 SYMLINK libspdk_event_bdev.so
00:02:21.023 CC module/event/subsystems/nbd/nbd.o
00:02:21.023 CC module/event/subsystems/scsi/scsi.o
00:02:21.023 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:21.023 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:21.023 CC module/event/subsystems/ublk/ublk.o
00:02:21.282 LIB libspdk_event_nbd.a
00:02:21.282 LIB libspdk_event_ublk.a
00:02:21.282 LIB libspdk_event_scsi.a
00:02:21.282 SO libspdk_event_nbd.so.6.0
00:02:21.282 SO libspdk_event_scsi.so.6.0
00:02:21.282 SO libspdk_event_ublk.so.3.0
00:02:21.282 LIB libspdk_event_nvmf.a
00:02:21.282 SYMLINK libspdk_event_nbd.so
00:02:21.282 SYMLINK libspdk_event_ublk.so
00:02:21.282 SYMLINK libspdk_event_scsi.so
00:02:21.282 SO libspdk_event_nvmf.so.6.0
00:02:21.542 SYMLINK libspdk_event_nvmf.so
00:02:21.542 CC module/event/subsystems/iscsi/iscsi.o
00:02:21.882 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:21.882 LIB libspdk_event_vhost_scsi.a
00:02:21.882 LIB libspdk_event_iscsi.a
00:02:21.882 SO libspdk_event_vhost_scsi.so.3.0
00:02:21.882 SO libspdk_event_iscsi.so.6.0
00:02:21.882 SYMLINK libspdk_event_vhost_scsi.so
00:02:21.882 SYMLINK libspdk_event_iscsi.so
00:02:22.142 SO libspdk.so.6.0
00:02:22.142 SYMLINK libspdk.so
00:02:22.402 CXX app/trace/trace.o
00:02:22.402 TEST_HEADER include/spdk/accel.h
00:02:22.402 TEST_HEADER include/spdk/barrier.h
00:02:22.402 TEST_HEADER include/spdk/assert.h
00:02:22.402 TEST_HEADER include/spdk/accel_module.h
00:02:22.402 TEST_HEADER include/spdk/bdev_module.h
00:02:22.402 TEST_HEADER include/spdk/bdev.h
00:02:22.402 TEST_HEADER include/spdk/base64.h
00:02:22.402 TEST_HEADER include/spdk/bdev_zone.h
00:02:22.402 TEST_HEADER include/spdk/bit_pool.h
00:02:22.402 TEST_HEADER include/spdk/bit_array.h
00:02:22.402 CC app/trace_record/trace_record.o
00:02:22.402 TEST_HEADER include/spdk/blob_bdev.h
00:02:22.402 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:22.402 TEST_HEADER include/spdk/blob.h
00:02:22.402 TEST_HEADER include/spdk/blobfs.h
00:02:22.402 TEST_HEADER include/spdk/config.h
00:02:22.402 TEST_HEADER include/spdk/cpuset.h
00:02:22.402 TEST_HEADER include/spdk/conf.h
00:02:22.402 TEST_HEADER include/spdk/crc16.h
00:02:22.402 TEST_HEADER include/spdk/crc64.h
00:02:22.402 TEST_HEADER include/spdk/crc32.h
00:02:22.402 CC test/rpc_client/rpc_client_test.o
00:02:22.402 CC app/spdk_nvme_discover/discovery_aer.o
00:02:22.402 TEST_HEADER include/spdk/dif.h
00:02:22.402 CC app/spdk_nvme_perf/perf.o
00:02:22.402 TEST_HEADER include/spdk/dma.h
00:02:22.402 CC app/spdk_top/spdk_top.o
00:02:22.402 TEST_HEADER include/spdk/env.h
00:02:22.402 TEST_HEADER include/spdk/endian.h
00:02:22.402 TEST_HEADER include/spdk/env_dpdk.h
00:02:22.402 TEST_HEADER include/spdk/event.h
00:02:22.402 TEST_HEADER include/spdk/fd_group.h
00:02:22.402 TEST_HEADER include/spdk/fd.h
00:02:22.402 CC app/spdk_nvme_identify/identify.o
00:02:22.402 TEST_HEADER include/spdk/ftl.h
00:02:22.402 TEST_HEADER include/spdk/file.h
00:02:22.402 CC app/spdk_lspci/spdk_lspci.o
00:02:22.403 TEST_HEADER include/spdk/gpt_spec.h
00:02:22.403 TEST_HEADER include/spdk/hexlify.h
00:02:22.403 TEST_HEADER include/spdk/histogram_data.h
00:02:22.403 TEST_HEADER include/spdk/init.h
00:02:22.403 TEST_HEADER include/spdk/idxd.h
00:02:22.403 TEST_HEADER include/spdk/idxd_spec.h
00:02:22.403 TEST_HEADER include/spdk/ioat_spec.h
00:02:22.403 TEST_HEADER include/spdk/iscsi_spec.h
00:02:22.403 TEST_HEADER include/spdk/ioat.h
00:02:22.403 TEST_HEADER include/spdk/json.h
00:02:22.403 TEST_HEADER include/spdk/keyring.h
00:02:22.403 TEST_HEADER include/spdk/jsonrpc.h
00:02:22.403 TEST_HEADER include/spdk/likely.h
00:02:22.403 TEST_HEADER include/spdk/keyring_module.h
00:02:22.403 TEST_HEADER include/spdk/log.h
00:02:22.403 TEST_HEADER include/spdk/memory.h
00:02:22.403 TEST_HEADER include/spdk/lvol.h
00:02:22.403 TEST_HEADER include/spdk/mmio.h
00:02:22.403 TEST_HEADER include/spdk/notify.h
00:02:22.403 TEST_HEADER include/spdk/nbd.h
00:02:22.403 TEST_HEADER include/spdk/nvme.h
00:02:22.403 TEST_HEADER include/spdk/nvme_intel.h
00:02:22.403 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:22.403 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:22.403 TEST_HEADER include/spdk/nvme_spec.h
00:02:22.403 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:22.403 TEST_HEADER include/spdk/nvme_zns.h
00:02:22.403 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:22.403 TEST_HEADER include/spdk/nvmf.h
00:02:22.403 TEST_HEADER include/spdk/nvmf_transport.h
00:02:22.403 TEST_HEADER include/spdk/nvmf_spec.h
00:02:22.403 TEST_HEADER include/spdk/opal.h
00:02:22.403 TEST_HEADER include/spdk/opal_spec.h
00:02:22.403 TEST_HEADER include/spdk/pci_ids.h
00:02:22.403 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:22.403 TEST_HEADER include/spdk/queue.h
00:02:22.403 TEST_HEADER include/spdk/reduce.h
00:02:22.403 TEST_HEADER include/spdk/pipe.h
00:02:22.403 CC app/spdk_dd/spdk_dd.o
00:02:22.403 TEST_HEADER include/spdk/scsi.h
00:02:22.403 TEST_HEADER include/spdk/rpc.h
00:02:22.403 TEST_HEADER include/spdk/scheduler.h
00:02:22.403 TEST_HEADER include/spdk/sock.h
00:02:22.403 TEST_HEADER include/spdk/scsi_spec.h
00:02:22.403 TEST_HEADER include/spdk/stdinc.h
00:02:22.403 TEST_HEADER include/spdk/string.h
00:02:22.403 CC app/iscsi_tgt/iscsi_tgt.o
00:02:22.403 TEST_HEADER include/spdk/trace.h
00:02:22.403 TEST_HEADER include/spdk/thread.h
00:02:22.403 TEST_HEADER include/spdk/trace_parser.h
00:02:22.403 TEST_HEADER include/spdk/tree.h
00:02:22.403 TEST_HEADER include/spdk/ublk.h
00:02:22.403 TEST_HEADER include/spdk/util.h
00:02:22.403 TEST_HEADER include/spdk/version.h
00:02:22.403 TEST_HEADER include/spdk/uuid.h
00:02:22.403 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:22.403 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:22.403 CC app/nvmf_tgt/nvmf_main.o
00:02:22.403 TEST_HEADER include/spdk/vhost.h
00:02:22.403 TEST_HEADER include/spdk/vmd.h
00:02:22.403 TEST_HEADER include/spdk/xor.h
00:02:22.403 TEST_HEADER include/spdk/zipf.h
00:02:22.403 CXX test/cpp_headers/accel_module.o
00:02:22.403 CXX test/cpp_headers/accel.o
00:02:22.403 CXX test/cpp_headers/barrier.o
00:02:22.403 CXX test/cpp_headers/base64.o
00:02:22.403 CXX test/cpp_headers/bdev.o
00:02:22.403 CXX test/cpp_headers/assert.o
00:02:22.403 CXX test/cpp_headers/bit_pool.o
00:02:22.403 CXX test/cpp_headers/bit_array.o
00:02:22.403 CXX test/cpp_headers/bdev_zone.o
00:02:22.403 CXX test/cpp_headers/blob_bdev.o
00:02:22.403 CXX test/cpp_headers/bdev_module.o
00:02:22.403 CXX test/cpp_headers/blobfs_bdev.o
00:02:22.403 CXX test/cpp_headers/blob.o
00:02:22.403 CXX test/cpp_headers/blobfs.o
00:02:22.403 CXX test/cpp_headers/conf.o
00:02:22.403 CXX test/cpp_headers/cpuset.o
00:02:22.403 CXX test/cpp_headers/config.o
00:02:22.403 CXX test/cpp_headers/crc16.o
00:02:22.403 CXX test/cpp_headers/crc32.o
00:02:22.403 CXX test/cpp_headers/dif.o
00:02:22.403 CXX test/cpp_headers/dma.o
00:02:22.403 CXX test/cpp_headers/crc64.o
00:02:22.403 CXX test/cpp_headers/endian.o
00:02:22.403 CXX test/cpp_headers/env.o
00:02:22.403 CXX test/cpp_headers/event.o
00:02:22.403 CXX test/cpp_headers/fd.o
00:02:22.403 CC app/spdk_tgt/spdk_tgt.o
00:02:22.403 CXX test/cpp_headers/env_dpdk.o
00:02:22.403 CXX test/cpp_headers/fd_group.o
00:02:22.403 CXX test/cpp_headers/file.o
00:02:22.403 CXX test/cpp_headers/gpt_spec.o
00:02:22.403 CXX test/cpp_headers/hexlify.o
00:02:22.403 CXX test/cpp_headers/ftl.o
00:02:22.403 CXX test/cpp_headers/histogram_data.o
00:02:22.403 CXX test/cpp_headers/idxd_spec.o
00:02:22.403 CXX test/cpp_headers/ioat.o
00:02:22.403 CXX test/cpp_headers/idxd.o
00:02:22.403 CXX test/cpp_headers/init.o
00:02:22.682 CXX test/cpp_headers/iscsi_spec.o
00:02:22.682 CXX test/cpp_headers/jsonrpc.o
00:02:22.682 CXX test/cpp_headers/ioat_spec.o
00:02:22.682 CXX test/cpp_headers/json.o
00:02:22.682 CXX test/cpp_headers/keyring_module.o
00:02:22.682 CXX test/cpp_headers/keyring.o
00:02:22.682 CXX test/cpp_headers/likely.o
00:02:22.682 CXX test/cpp_headers/log.o
00:02:22.682 CXX test/cpp_headers/lvol.o
00:02:22.682 CXX test/cpp_headers/memory.o
00:02:22.682 CXX test/cpp_headers/mmio.o
00:02:22.682 CXX test/cpp_headers/nbd.o
00:02:22.682 CXX test/cpp_headers/notify.o
00:02:22.682 CXX test/cpp_headers/nvme_intel.o
00:02:22.682 CXX test/cpp_headers/nvme.o
00:02:22.682 CXX test/cpp_headers/nvme_ocssd.o
00:02:22.682 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:22.682 CXX test/cpp_headers/nvme_spec.o
00:02:22.682 CXX test/cpp_headers/nvme_zns.o
00:02:22.682 CXX test/cpp_headers/nvmf_cmd.o
00:02:22.682 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:22.682 CXX test/cpp_headers/nvmf.o
00:02:22.682 CXX test/cpp_headers/nvmf_spec.o
00:02:22.682 CXX test/cpp_headers/nvmf_transport.o
00:02:22.682 CXX test/cpp_headers/opal.o
00:02:22.682 CXX test/cpp_headers/opal_spec.o
00:02:22.682 CXX test/cpp_headers/pci_ids.o
00:02:22.682 CXX test/cpp_headers/queue.o
00:02:22.682 CXX test/cpp_headers/pipe.o
00:02:22.682 CXX test/cpp_headers/reduce.o
00:02:22.682 CXX test/cpp_headers/rpc.o
00:02:22.682 CC examples/util/zipf/zipf.o
00:02:22.682 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:22.682 CC examples/ioat/perf/perf.o
00:02:22.682 CC test/env/vtophys/vtophys.o
00:02:22.682 CC test/env/pci/pci_ut.o
00:02:22.682 CC examples/ioat/verify/verify.o
00:02:22.682 CC test/app/stub/stub.o
00:02:22.682 CC app/fio/nvme/fio_plugin.o
00:02:22.682 CXX test/cpp_headers/scheduler.o
00:02:22.682 CC test/thread/poller_perf/poller_perf.o
00:02:22.682 CC test/app/jsoncat/jsoncat.o
00:02:22.682 CC test/env/memory/memory_ut.o
00:02:22.682 CC test/app/histogram_perf/histogram_perf.o
00:02:22.682 CC test/app/bdev_svc/bdev_svc.o
00:02:22.682 CC test/dma/test_dma/test_dma.o
00:02:22.682 CC app/fio/bdev/fio_plugin.o
00:02:22.977 LINK rpc_client_test
00:02:22.977 LINK spdk_lspci
00:02:22.977 LINK spdk_nvme_discover
00:02:23.236 CC test/env/mem_callbacks/mem_callbacks.o
00:02:23.236 CXX test/cpp_headers/scsi.o
00:02:23.236 CXX test/cpp_headers/scsi_spec.o
00:02:23.236 CXX test/cpp_headers/sock.o
00:02:23.236 CXX test/cpp_headers/stdinc.o
00:02:23.236 LINK iscsi_tgt
00:02:23.236 CXX test/cpp_headers/string.o
00:02:23.236 CXX test/cpp_headers/thread.o
00:02:23.236 CXX test/cpp_headers/trace.o
00:02:23.236 CXX test/cpp_headers/trace_parser.o
00:02:23.236 CXX test/cpp_headers/ublk.o
00:02:23.236 CXX test/cpp_headers/util.o
00:02:23.236 CXX test/cpp_headers/uuid.o
00:02:23.236 CXX test/cpp_headers/tree.o
00:02:23.236 CXX test/cpp_headers/version.o
00:02:23.236 LINK spdk_tgt
00:02:23.236 CXX test/cpp_headers/vfio_user_pci.o
00:02:23.236 LINK jsoncat
00:02:23.236 LINK interrupt_tgt
00:02:23.236 CXX test/cpp_headers/vfio_user_spec.o
00:02:23.236 CXX test/cpp_headers/vmd.o
00:02:23.236 LINK histogram_perf
00:02:23.236 CXX test/cpp_headers/vhost.o
00:02:23.236 CXX test/cpp_headers/zipf.o
00:02:23.236 CXX test/cpp_headers/xor.o
00:02:23.236 LINK nvmf_tgt
00:02:23.236 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:23.236 LINK stub
00:02:23.236 LINK bdev_svc
00:02:23.236 LINK verify
00:02:23.236 LINK spdk_trace_record
00:02:23.236 LINK vtophys
00:02:23.236 LINK spdk_dd
00:02:23.236 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:23.236 LINK zipf
00:02:23.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:23.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:23.236 LINK env_dpdk_post_init
00:02:23.236 LINK poller_perf
00:02:23.236 LINK spdk_trace
00:02:23.236 LINK ioat_perf
00:02:23.495 LINK test_dma
00:02:23.495 LINK pci_ut
00:02:23.753 LINK nvme_fuzz
00:02:23.753 LINK spdk_nvme
00:02:23.753 LINK spdk_bdev
00:02:23.753 LINK vhost_fuzz
00:02:23.753 CC app/vhost/vhost.o
00:02:23.753 LINK spdk_nvme_identify
00:02:23.753 LINK spdk_top
00:02:23.753 CC examples/vmd/led/led.o
00:02:23.753 CC examples/idxd/perf/perf.o
00:02:23.753 LINK spdk_nvme_perf
00:02:23.753 CC examples/sock/hello_world/hello_sock.o
00:02:23.753 CC examples/vmd/lsvmd/lsvmd.o
00:02:23.753 CC test/event/reactor/reactor.o
00:02:23.753 CC test/event/event_perf/event_perf.o
00:02:23.753 CC test/event/reactor_perf/reactor_perf.o
00:02:23.753 LINK mem_callbacks
00:02:23.753 CC examples/thread/thread/thread_ex.o
00:02:23.753 CC test/event/app_repeat/app_repeat.o
00:02:23.753 CC test/event/scheduler/scheduler.o
00:02:24.012 LINK led
00:02:24.012 LINK lsvmd
00:02:24.012 LINK vhost
00:02:24.012 LINK reactor
00:02:24.012 LINK reactor_perf
00:02:24.012 LINK event_perf
00:02:24.012 CC test/nvme/reset/reset.o
00:02:24.012 CC test/nvme/reserve/reserve.o
00:02:24.012 CC test/nvme/sgl/sgl.o
00:02:24.012 LINK hello_sock
00:02:24.012 CC test/nvme/startup/startup.o
00:02:24.012 CC test/nvme/e2edp/nvme_dp.o
00:02:24.012 CC test/nvme/err_injection/err_injection.o
00:02:24.012 CC test/nvme/cuse/cuse.o
00:02:24.012 CC test/nvme/connect_stress/connect_stress.o
00:02:24.012 CC test/nvme/fused_ordering/fused_ordering.o
00:02:24.012 CC test/nvme/overhead/overhead.o
00:02:24.012 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:24.012 CC test/nvme/simple_copy/simple_copy.o
00:02:24.012 CC test/nvme/aer/aer.o
00:02:24.012 CC test/nvme/compliance/nvme_compliance.o
00:02:24.012 CC test/nvme/boot_partition/boot_partition.o
00:02:24.012 LINK app_repeat
00:02:24.012 CC test/nvme/fdp/fdp.o
00:02:24.012 CC test/accel/dif/dif.o
00:02:24.012 CC test/blobfs/mkfs/mkfs.o
00:02:24.012 LINK idxd_perf
00:02:24.012 LINK thread
00:02:24.012 LINK scheduler
00:02:24.012 LINK memory_ut
00:02:24.012 CC test/lvol/esnap/esnap.o
00:02:24.272 LINK boot_partition
00:02:24.272 LINK startup
00:02:24.272 LINK doorbell_aers
00:02:24.272 LINK fused_ordering
00:02:24.272 LINK err_injection
00:02:24.272 LINK reserve
00:02:24.272 LINK connect_stress
00:02:24.272 LINK reset
00:02:24.272 LINK simple_copy
00:02:24.272 LINK sgl
00:02:24.272 LINK nvme_dp
00:02:24.272 LINK mkfs
00:02:24.272 LINK overhead
00:02:24.272 LINK aer
00:02:24.272 LINK nvme_compliance
00:02:24.272 LINK fdp
00:02:24.272 CC examples/nvme/hotplug/hotplug.o
00:02:24.272 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:24.272 CC examples/nvme/reconnect/reconnect.o
00:02:24.272 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:24.272 CC examples/nvme/arbitration/arbitration.o
00:02:24.272 CC examples/nvme/hello_world/hello_world.o
00:02:24.272 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:24.272 CC examples/nvme/abort/abort.o
00:02:24.272 LINK dif
00:02:24.530 LINK cmb_copy
00:02:24.530 LINK pmr_persistence
00:02:24.530 CC examples/accel/perf/accel_perf.o
00:02:24.530 LINK hotplug
00:02:24.530 LINK hello_world
00:02:24.530 CC examples/blob/hello_world/hello_blob.o
00:02:24.530 CC examples/blob/cli/blobcli.o
00:02:24.530 LINK iscsi_fuzz
00:02:24.530 LINK arbitration
00:02:24.530 LINK reconnect
00:02:24.530 LINK abort
00:02:24.789 LINK nvme_manage
00:02:24.789 LINK hello_blob
00:02:24.789 CC test/bdev/bdevio/bdevio.o
00:02:24.789 LINK accel_perf
00:02:25.049 LINK blobcli
00:02:25.049 LINK cuse
00:02:25.308 LINK bdevio
00:02:25.308 CC examples/bdev/hello_world/hello_bdev.o
00:02:25.308 CC examples/bdev/bdevperf/bdevperf.o
00:02:25.566 LINK hello_bdev
00:02:25.824 LINK bdevperf
00:02:26.391 CC examples/nvmf/nvmf/nvmf.o
00:02:26.650 LINK nvmf
00:02:27.587 LINK esnap
00:02:27.847
00:02:27.847 real 0m45.221s
00:02:27.847 user 6m32.498s
00:02:27.847 sys 3m29.038s
00:02:27.847 07:38:12 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:27.847 07:38:12 make -- common/autotest_common.sh@10 -- $ set +x
00:02:27.847 ************************************
00:02:27.847 END TEST make
00:02:27.847 ************************************
00:02:27.847 07:38:12 -- common/autotest_common.sh@1142 -- $ return 0
00:02:27.847 07:38:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:27.847 07:38:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:27.847 07:38:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:27.847 07:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.847 07:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:27.847 07:38:12 -- pm/common@44 -- $ pid=2961192
00:02:27.847 07:38:12 -- pm/common@50 -- $ kill -TERM 2961192
00:02:27.847 07:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.847 07:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:27.847 07:38:12 -- pm/common@44 -- $ pid=2961194
00:02:27.847 07:38:12 -- pm/common@50 -- $ kill -TERM 2961194
00:02:27.847 07:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.847 07:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:27.847 07:38:12 -- pm/common@44 -- $ pid=2961195
00:02:27.847 07:38:12 -- pm/common@50 -- $ kill -TERM 2961195
00:02:27.847 07:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:27.847 07:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:27.847 07:38:12 -- pm/common@44 -- $ pid=2961218
00:02:27.847 07:38:12 -- pm/common@50 -- $ sudo -E kill -TERM 2961218
00:02:28.108 07:38:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:28.108 07:38:12 -- nvmf/common.sh@7 -- # uname -s
00:02:28.108 07:38:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:28.108 07:38:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:28.108 07:38:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:28.108 07:38:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:28.108 07:38:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:28.108 07:38:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:28.108 07:38:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:28.108 07:38:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:28.108 07:38:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:28.108 07:38:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:28.108 07:38:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:02:28.108 07:38:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:02:28.108 07:38:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:28.108 07:38:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:28.108 07:38:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:28.108 07:38:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:28.108 07:38:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:28.108 07:38:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:28.108 07:38:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:28.108 07:38:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:28.108 07:38:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.108 07:38:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.108 07:38:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.108 07:38:12 -- paths/export.sh@5 -- # export PATH
00:02:28.108 07:38:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.108 07:38:12 -- nvmf/common.sh@47 -- # : 0
00:02:28.108 07:38:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:28.108 07:38:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:28.108 07:38:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:28.108 07:38:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:28.108 07:38:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:28.108 07:38:12 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:28.108 07:38:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:28.108 07:38:12 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:28.108 07:38:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:28.108 07:38:12 -- spdk/autotest.sh@32 -- # uname -s
00:02:28.108 07:38:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:28.108 07:38:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:28.108 07:38:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:28.108 07:38:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:28.108 07:38:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:28.108 07:38:12 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:28.108 07:38:12 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:28.108 07:38:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:28.108 07:38:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:28.108 07:38:12 -- spdk/autotest.sh@48 -- # udevadm_pid=3020428
00:02:28.108 07:38:12 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:28.108 07:38:12 -- pm/common@17 -- # local monitor
00:02:28.108 07:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.108 07:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.108 07:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.108 07:38:12 -- pm/common@21 -- # date +%s
00:02:28.108 07:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.108 07:38:12 -- pm/common@21 -- # date +%s
00:02:28.108 07:38:12 -- pm/common@25 -- # sleep 1
00:02:28.108 07:38:12 -- pm/common@21 -- # date +%s
00:02:28.108 07:38:12 -- pm/common@21 -- # date +%s
00:02:28.108 07:38:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021892
00:02:28.108 07:38:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021892
00:02:28.108 07:38:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021892
00:02:28.108 07:38:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721021892
00:02:28.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021892_collect-vmstat.pm.log
00:02:28.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021892_collect-cpu-load.pm.log
00:02:28.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021892_collect-cpu-temp.pm.log
00:02:28.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721021892_collect-bmc-pm.bmc.pm.log
00:02:29.048 07:38:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:29.048 07:38:13 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:29.048 07:38:13 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:29.048 07:38:13 -- common/autotest_common.sh@10 -- # set +x
00:02:29.048 07:38:13 -- spdk/autotest.sh@59 -- # create_test_list
00:02:29.048 07:38:13 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:29.048 07:38:13 -- common/autotest_common.sh@10 -- # set +x
00:02:29.048 07:38:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:29.048 07:38:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:29.048 07:38:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
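(The stop_monitor_resources and start_monitor_resources traces above follow a plain PID-file convention: each collector records its PID under the power output directory when it starts, and teardown signals whichever PID the file names. A minimal bash sketch of that pattern follows; the helper names and flags are illustrative stand-ins, not SPDK's actual pm/common implementation.)

    #!/usr/bin/env bash
    # Sketch of the PID-file start/stop pattern visible in the pm/common trace.
    # "$name" stands in for a sampler such as collect-cpu-load; flags are illustrative.
    start_monitor() {
      local out_dir=$1 name=$2
      "$name" -d "$out_dir" -l &             # launch the sampler in the background
      echo $! > "$out_dir/$name.pid"         # record its PID for later teardown
    }
    stop_monitor() {
      local out_dir=$1 name=$2 pid
      [[ -e $out_dir/$name.pid ]] || return 0   # nothing was started; skip, as the [[ -e ... ]] checks above do
      pid=$(<"$out_dir/$name.pid")
      kill -TERM "$pid" 2>/dev/null || true     # mirrors the 'kill -TERM <pid>' lines in the log
    }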
00:02:29.048 07:38:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:29.048 07:38:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:29.048 07:38:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:29.048 07:38:13 -- common/autotest_common.sh@1455 -- # uname
00:02:29.048 07:38:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:29.048 07:38:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:29.048 07:38:13 -- common/autotest_common.sh@1475 -- # uname
00:02:29.048 07:38:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:29.048 07:38:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:29.048 07:38:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:29.048 07:38:13 -- spdk/autotest.sh@72 -- # hash lcov
00:02:29.048 07:38:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:29.048 07:38:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:29.048 --rc lcov_branch_coverage=1
00:02:29.048 --rc lcov_function_coverage=1
00:02:29.048 --rc genhtml_branch_coverage=1
00:02:29.048 --rc genhtml_function_coverage=1
00:02:29.048 --rc genhtml_legend=1
00:02:29.048 --rc geninfo_all_blocks=1
00:02:29.048 '
00:02:29.048 07:38:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:29.048 --rc lcov_branch_coverage=1
00:02:29.048 --rc lcov_function_coverage=1
00:02:29.048 --rc genhtml_branch_coverage=1
00:02:29.048 --rc genhtml_function_coverage=1
00:02:29.048 --rc genhtml_legend=1
00:02:29.048 --rc geninfo_all_blocks=1
00:02:29.048 '
00:02:29.049 07:38:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:29.049 --rc lcov_branch_coverage=1
00:02:29.049 --rc lcov_function_coverage=1
00:02:29.049 --rc genhtml_branch_coverage=1
00:02:29.049 --rc genhtml_function_coverage=1
00:02:29.049 --rc genhtml_legend=1
00:02:29.049 --rc geninfo_all_blocks=1
00:02:29.049 --no-external'
00:02:29.049 07:38:13 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:29.049 --rc lcov_branch_coverage=1
00:02:29.049 --rc lcov_function_coverage=1
00:02:29.049 --rc genhtml_branch_coverage=1
00:02:29.049 --rc genhtml_function_coverage=1
00:02:29.049 --rc genhtml_legend=1
00:02:29.049 --rc geninfo_all_blocks=1
00:02:29.049 --no-external'
00:02:29.049 07:38:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:29.307 lcov: LCOV version 1.14
00:02:29.307 07:38:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:33.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:02:33.496 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
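(The lcov invocation above is the standard two-phase gcov workflow: capture an "initial" baseline of zero execution counts before any test runs, so a post-run capture can later be combined against it without dropping never-executed files. The 'no functions found' warnings around it appear to be benign here, since the test/cpp_headers objects are compile-only checks with no executable code. A condensed bash sketch of the baseline step, with placeholder paths in $SPDK_DIR and $OUT_DIR:)

    # Capture a zero-count coverage baseline (-c -i) before running tests.
    # The --rc switches mirror the LCOV_OPTS exported in the trace above;
    # $SPDK_DIR and $OUT_DIR are placeholders for the job's actual paths.
    LCOV_RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    lcov $LCOV_RC --no-external -q -c -i -t Baseline \
        -d "$SPDK_DIR" -o "$OUT_DIR/cov_base.info"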
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:33.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:33.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:33.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:48.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:48.375 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:53.646 07:38:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:53.646 07:38:38 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:53.646 07:38:38 -- common/autotest_common.sh@10 -- # set +x
00:02:53.646 07:38:38 -- spdk/autotest.sh@91 -- # rm -f
00:02:53.646 07:38:38 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:56.982 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:56.982 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:56.982 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:56.982 07:38:41 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:56.982 07:38:41 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:56.982 07:38:41 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:56.982 07:38:41 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:56.982 07:38:41 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:56.982 07:38:41 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:56.982 07:38:41 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:56.982 07:38:41 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:56.982 07:38:41 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:56.982 07:38:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:56.982 07:38:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:56.982 07:38:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:56.982 07:38:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:56.982 07:38:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:56.982 07:38:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:56.982 No valid GPT data, bailing
00:02:56.982 07:38:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:56.982 07:38:41 -- scripts/common.sh@391 -- # pt=
00:02:56.982 07:38:41 -- scripts/common.sh@392 -- # return 1
00:02:56.982 07:38:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:56.982 1+0 records in
00:02:56.982 1+0 records out
00:02:56.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558761 s, 188 MB/s
00:02:56.982 07:38:41 -- spdk/autotest.sh@118 -- # sync
00:02:56.982 07:38:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:56.982 07:38:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:56.982 07:38:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes
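(The pre_cleanup trace above shows the gist of the setup-time disk hygiene: enumerate whole NVMe namespaces, skipping partitions via the extglob pattern /dev/nvme*n!(*p*), leave zoned namespaces untouched, and zero the first MiB of any namespace whose partition-table probe comes back empty. A condensed bash sketch of that logic follows; it folds the spdk-gpt.py probe used by the real autotest.sh into the blkid fallback shown in the log, so it is an approximation rather than the exact script.)

    #!/usr/bin/env bash
    shopt -s extglob                 # needed for the !(*p*) pattern below
    for dev in /dev/nvme*n!(*p*); do
      name=${dev#/dev/}
      # Zoned namespaces are excluded: /sys/block/<ns>/queue/zoned reads 'none' otherwise.
      if [[ -e /sys/block/$name/queue/zoned && $(</sys/block/$name/queue/zoned) != none ]]; then
        continue
      fi
      # No recognizable partition table: wipe stale metadata in the first MiB.
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
      fi
    done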
00:03:02.256 07:38:46 -- spdk/autotest.sh@124 -- # uname -s
00:03:02.256 07:38:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:02.256 07:38:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:02.256 07:38:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:02.256 07:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.256 07:38:46 -- common/autotest_common.sh@10 -- # set +x
00:03:02.256 ************************************
00:03:02.256 START TEST setup.sh
00:03:02.256 ************************************
00:03:02.256 07:38:46 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:02.256 * Looking for test storage...
00:03:02.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:02.256 07:38:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:02.256 07:38:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:02.256 07:38:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:02.256 07:38:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:02.256 07:38:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.256 07:38:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:02.256 ************************************
00:03:02.256 START TEST acl
00:03:02.256 ************************************
00:03:02.256 07:38:46 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:02.515 * Looking for test storage...
00:03:02.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:02.516 07:38:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:02.516 07:38:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:02.516 07:38:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:02.516 07:38:47 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:05.810 07:38:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:05.810 07:38:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:05.810 07:38:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.810 07:38:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:05.810 07:38:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.810 07:38:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:08.348 Hugepages
00:03:08.348 node hugesize free / total
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348
00:03:08.348 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:08.348 07:38:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.348 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:08.607 07:38:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:08.607 07:38:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:08.607 07:38:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:08.607 07:38:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:08.607 ************************************
00:03:08.607 START TEST denied
00:03:08.607 ************************************
00:03:08.607 07:38:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:03:08.607 07:38:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:03:08.607 07:38:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:08.607 07:38:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:03:08.607 07:38:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.607 07:38:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.895 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:11.896 07:38:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:16.092
00:03:16.092 real 0m7.176s
00:03:16.092 user 0m2.283s
00:03:16.092 sys 0m4.163s
00:03:16.092 07:39:00 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:16.092 07:39:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:16.092 ************************************
00:03:16.092 END TEST denied
00:03:16.092 ************************************
00:03:16.092 07:39:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:16.092 07:39:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:16.092 07:39:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:16.092 07:39:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:16.092 07:39:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:16.092 ************************************
00:03:16.092 START TEST allowed
00:03:16.092 ************************************
00:03:16.092 07:39:00 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:16.092 07:39:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:16.092 07:39:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:16.092 07:39:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:16.092 07:39:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.092 07:39:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:20.293 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:20.293 07:39:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:20.294 07:39:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:20.294 07:39:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:20.294 07:39:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:20.294 07:39:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:22.828
00:03:22.828 real 0m7.078s
00:03:22.828 user 0m2.220s
00:03:22.828 sys 0m4.025s
00:03:22.828 07:39:07 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:22.828 07:39:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:22.828 ************************************
00:03:22.828 END TEST allowed
00:03:22.828 ************************************
00:03:22.828 07:39:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:22.828
00:03:22.828 real 0m20.589s
00:03:22.828 user 0m6.863s
00:03:22.828 sys 0m12.382s
00:03:22.828 07:39:07 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:22.828 07:39:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:22.828 ************************************
00:03:22.828 END TEST acl
00:03:22.828 ************************************
00:03:23.088 07:39:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:23.088 07:39:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:23.088 07:39:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:23.088 07:39:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:23.088 07:39:07 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:23.088 ************************************
00:03:23.088 START TEST hugepages
00:03:23.088 ************************************
00:03:23.088 07:39:07 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:23.088 * Looking for test storage...
00:03:23.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173364560 kB' 'MemAvailable: 176232780 kB' 'Buffers: 4928 kB' 'Cached: 10159104 kB' 'SwapCached: 0 kB' 'Active: 7174848 kB' 'Inactive: 3508388 kB' 'Active(anon): 6782840 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523128 kB' 'Mapped: 220456 kB' 'Shmem: 6263636 kB' 'KReclaimable: 224552 kB' 'Slab: 772712 kB' 'SReclaimable: 224552 kB' 'SUnreclaim: 548160 kB' 'KernelStack: 20592 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8289652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314908 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.088 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:23.089 07:39:07 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:23.089 
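What this xtrace is executing is a plain IFS-driven scan of /proc/meminfo. A minimal standalone sketch of the same pattern (the helper name get_meminfo_value is hypothetical, not SPDK's own get_meminfo):

    # Scan /proc/meminfo for one key and print its value.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # non-matching keys are skipped, as traced above
            echo "$val"                         # numeric value only, e.g. 2048
            return 0
        done < /proc/meminfo
        return 1                                # key not present
    }
    get_meminfo_value Hugepagesize              # prints 2048 on this host

IFS=': ' makes read split 'Hugepagesize:       2048 kB' into var=Hugepagesize and val=2048, discarding the unit into _.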
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:23.089 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:23.090 07:39:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:23.090 07:39:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:23.090 07:39:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:23.090 07:39:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.090 ************************************
00:03:23.090 START TEST default_setup
00:03:23.090 ************************************
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') / local node_ids
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') / local user_nodes
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() / local -g nodes_test
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70-71 -- # for _no_nodes in "${user_nodes[@]}": nodes_test[_no_nodes]=1024
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.090 07:39:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
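get_test_nr_hugepages above is one division: 2097152 kB of requested hugetlb memory over 2048 kB pages gives nr_hugepages=1024, targeted at node 0. A sketch of the arithmetic and the two sysfs knobs named at hugepages.sh@17-18 (illustrative only; scripts/setup.sh performs the real allocation, and the writes require root):

    size_kb=2097152                 # requested size, from the trace
    page_kb=2048                    # Hugepagesize reported by /proc/meminfo
    nr=$(( size_kb / page_kb ))     # -> 1024 pages
    # per-node pool (node 0), the NUMA-aware path:
    echo "$nr" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # or the global default-size pool, the path bound to global_huge_nr above:
    echo "$nr" > /proc/sys/vm/nr_hugepages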
00:03:26.390 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:26.390 [identical rebinds elided: 0000:00:04.0-0000:00:04.6 and 0000:80:04.0-0000:80:04.7 (8086 2021), all ioatdma -> vfio-pci]
00:03:27.021 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # get=AnonHugePages; node=; mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175550404 kB' 'MemAvailable: 178418592 kB' 'Buffers: 4928 kB' 'Cached: 10159204 kB' 'SwapCached: 0 kB' 'Active: 7181508 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789500 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529040 kB' 'Mapped: 219516 kB' 'Shmem: 6263736 kB' 'KReclaimable: 224488 kB' 'Slab: 771048 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546560 kB' 'KernelStack: 20784 kB' 'PageTables: 9868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:27.021 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace elided: each key above is tested against 'AnonHugePages' and skipped via continue]
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
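The hugepages.sh@96 test above matches /sys/kernel/mm/transparent_hugepage/enabled ('always [madvise] never' on this host) against *[never]*, so anonymous THP is not disabled and AnonHugePages is worth reading; the snapshot reports 0 kB, hence anon=0. A sketch of that guard (illustrative, not the verbatim SPDK code):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # 'always [madvise] never' here
    if [[ $thp != *"[never]"* ]]; then
        # THP-backed anonymous memory in kB; 0 in the snapshot above
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0   # THP disabled: nothing to count
    fi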
00:03:27.022 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # get=HugePages_Surp; node=; mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _
00:03:27.023 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175551164 kB' 'MemAvailable: 178419352 kB' 'Buffers: 4928 kB' 'Cached: 10159208 kB' 'SwapCached: 0 kB' 'Active: 7181188 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789180 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528664 kB' 'Mapped: 219512 kB' 'Shmem: 6263740 kB' 'KReclaimable: 224488 kB' 'Slab: 771068 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546580 kB' 'KernelStack: 20576 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:27.023 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace elided: each key above is tested against 'HugePages_Surp' and skipped via continue]
00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
setup/common.sh@28 -- # mapfile -t mem 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549720 kB' 'MemAvailable: 178417908 kB' 'Buffers: 4928 kB' 'Cached: 10159208 kB' 'SwapCached: 0 kB' 'Active: 7181064 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789056 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528540 kB' 'Mapped: 219512 kB' 'Shmem: 6263740 kB' 'KReclaimable: 224488 kB' 'Slab: 771068 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546580 kB' 'KernelStack: 20576 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.024 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 
07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.025 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.026 nr_hugepages=1024 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.026 resv_hugepages=0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.026 surplus_hugepages=0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.026 anon_hugepages=0 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.026 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 
07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175550336 kB' 'MemAvailable: 178418524 kB' 'Buffers: 4928 kB' 'Cached: 10159256 kB' 'SwapCached: 0 kB' 'Active: 7180864 kB' 'Inactive: 3508388 kB' 'Active(anon): 6788856 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528344 kB' 'Mapped: 219512 kB' 'Shmem: 6263788 kB' 'KReclaimable: 224488 kB' 'Slab: 771060 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546572 kB' 'KernelStack: 20544 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314808 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.027 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85913332 kB' 'MemUsed: 11749352 kB' 'SwapCached: 0 kB' 'Active: 4789580 kB' 'Inactive: 3336368 kB' 'Active(anon): 4632040 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946156 kB' 'Mapped: 71568 kB' 'AnonPages: 183056 kB' 'Shmem: 4452248 kB' 'KernelStack: 10392 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 382192 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 260320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:27.028 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the common.sh@31/@32 read loop steps through every node meminfo key in the snapshot above (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, ... FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), logging '# continue' for each key that is not HugePages_Surp]
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:27.290 node0=1024 expecting 1024
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.290
00:03:27.290 real	0m3.959s
00:03:27.290 user	0m1.273s
00:03:27.290 sys	0m1.964s
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:27.290 07:39:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:27.290 ************************************
00:03:27.290 END TEST default_setup
00:03:27.290 ************************************
00:03:27.290 07:39:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
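[editor's note] default_setup passes: with no HUGENODE given, all 1024 default-sized hugepages landed on node 0, and the pass condition is the literal string match traced at hugepages.sh@130 above. A stand-alone sketch of that node0 check against the standard kernel sysfs layout (2048 kB default hugepage size assumed; this paraphrases the test, it is not the SPDK source):

    expected=1024
    # per-node hugepage count as exported by the kernel's hugetlb sysfs interface
    nr=$(</sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=${nr} expecting ${expected}"
    [[ $nr == "$expected" ]]   # same shape as the '[[ 1024 == \1\0\2\4 ]]' entry above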
00:03:27.290 07:39:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:27.290 07:39:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.290 07:39:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.290 07:39:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:27.290 ************************************
00:03:27.290 START TEST per_node_1G_alloc
00:03:27.290 ************************************
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.290 07:39:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.824 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.824 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.824 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.089 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.089 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
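[editor's note] scripts/setup.sh ran with NRHUGE=512 HUGENODE=0,1, i.e. 512 hugepages requested on each of nodes 0 and 1. The equivalent request, sketched directly against the kernel's hugetlb sysfs interface (standard kernel paths, root required, 2048 kB page size assumed; this is the interface setup.sh drives, not its actual code):

    NRHUGE=512
    for node in 0 1; do   # HUGENODE=0,1
      # ask the kernel for NRHUGE 2 MiB pages on this specific NUMA node
      echo "$NRHUGE" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep HugePages_Total /proc/meminfo   # 512 + 512 -> expect 1024

Two nodes at 512 pages each is why the next trace entry records nr_hugepages=1024.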
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.089 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.090 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175559216 kB' 'MemAvailable: 178427404 kB' 'Buffers: 4928 kB' 'Cached: 10159352 kB' 'SwapCached: 0 kB' 'Active: 7181420 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789412 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528952 kB' 'Mapped: 219520 kB' 'Shmem: 6263884 kB' 'KReclaimable: 224488 kB' 'Slab: 770916 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546428 kB' 'KernelStack: 20480 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 
07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.091 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.092 07:39:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.092 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue for each non-matching /proc/meminfo field from VmallocTotal through HugePages_Rsvd]
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558964 kB' 'MemAvailable: 178427152 kB' 'Buffers: 4928 kB' 'Cached: 10159364 kB' 'SwapCached: 0 kB' 'Active: 7181200 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789192 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528672 kB' 'Mapped: 219520 kB' 'Shmem: 6263896 kB' 'KReclaimable: 224488 kB' 'Slab: 770916 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546428 kB' 'KernelStack: 20480 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
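[editor's note] The trace above is the get_meminfo() helper from setup/common.sh scanning a meminfo file one "Field: value" line at a time; the backslash-escaped pattern (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is simply how bash xtrace prints a quoted, literal [[ == ]] comparison. A minimal sketch of the same pattern, reconstructed from this trace rather than copied from the SPDK source (the per-node handling shown further below is omitted here):

    # Print the value of one field from /proc/meminfo, e.g.
    #   get_meminfo HugePages_Rsvd   -> 0 on this box
    get_meminfo() {
        local get=$1
        local var val _
        while IFS=': ' read -r var val _; do
            # Quoting "$get" forces a literal (non-glob) match; xtrace
            # renders it with every character escaped, as in the log.
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done </proc/meminfo
        return 1
    }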
00:03:30.093 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue for each non-matching field from MemTotal through HugePages_Free]
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
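[editor's note] Because the helper scans the file linearly, each lookup emits one "continue" trace per non-matching field, which is why this log is dominated by them. A hypothetical one-pass equivalent of the query above (an awk alternative for illustration, not the helper the suite uses):

    awk -v f=HugePages_Rsvd '$1 == (f":") {print $2}' /proc/meminfo   # prints 0 here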
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.095 nr_hugepages=1024
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.095 resv_hugepages=0
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.095 surplus_hugepages=0
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.095 anon_hugepages=0
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558940 kB' 'MemAvailable: 178427128 kB' 'Buffers: 4928 kB' 'Cached: 10159412 kB' 'SwapCached: 0 kB' 'Active: 7180852 kB' 'Inactive: 3508388 kB' 'Active(anon): 6788844 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528256 kB' 'Mapped: 219520 kB' 'Shmem: 6263944 kB' 'KReclaimable: 224488 kB' 'Slab: 770916 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546428 kB' 'KernelStack: 20448 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:30.095 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / continue for each non-matching field from MemTotal through Unaccepted]
00:03:30.097 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.097 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:30.097 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.097 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
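[editor's note] The arithmetic checks at hugepages.sh@107-110 assert the kernel's hugepage accounting: the HugePages_Total just read back (1024) must equal the requested nr_hugepages plus surplus and reserved pages (both 0 in this run). The invariant, sketched with the helper above (variable names assumed for illustration, not the suite's exact code):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) \
        || echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))"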
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.359 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86969444 kB' 'MemUsed: 10693240 kB' 'SwapCached: 0 kB' 'Active: 4789756 kB' 'Inactive: 3336368 kB' 'Active(anon): 4632216 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946304 kB' 'Mapped: 71560 kB' 'AnonPages: 183040 kB' 'Shmem: 4452396 kB' 'KernelStack: 10344 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 381956 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 260084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
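[editor's note] Two details of the per-node pass above are worth unpacking: get_nodes (hugepages.sh@27-33) discovers NUMA nodes with an extglob pathname pattern, and once get_meminfo switches to /sys/devices/system/node/node0/meminfo, the common.sh@29 expansion strips the "Node <n> " prefix those files put on every line, so the usual field scan still works. A sketch of both steps (assumes the same sysfs layout; a reconstruction, not the verbatim SPDK source):

    shopt -s extglob   # enables +([0-9]) in globs and expansions

    # Discover node IDs as in the get_nodes trace ("node0" -> "0").
    nodes=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes+=("${node##*node}")
    done
    echo "no_nodes=${#nodes[@]}"   # 2 in this run

    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0";
    # strip the prefix, then scan fields exactly as for /proc/meminfo.
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && echo "node0 surplus: $val"
    done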
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.360 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.360 07:39:14
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace elided for each remaining node0 meminfo field (MemUsed through Unaccepted) ...]
00:03:30.361 07:39:14
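The loop traced here is setup/common.sh's get_meminfo helper walking a node's meminfo one field at a time until it reaches the requested key (HugePages_Surp in this call) and echoing its value. Bash has no keyed lookup into /proc/meminfo, hence the linear field-by-field scan that makes this xtrace so long. A minimal sketch of the same idea, condensed from this trace rather than copied from the script, with get_meminfo_sketch as a stand-in name:

# Sketch of the traced helper (an approximation condensed from this xtrace,
# not the verbatim setup/common.sh source).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it first.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}
# Example: get_meminfo_sketch HugePages_Surp 0  ->  prints 0 on this box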
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.361 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.362 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88588864 kB' 'MemUsed: 5129604 kB' 'SwapCached: 0 kB' 'Active: 2391364 kB' 'Inactive: 172020 kB' 'Active(anon): 2156896 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2218040 kB' 'Mapped: 147960 kB' 'AnonPages: 345544 kB' 'Shmem: 1811552 kB' 
'KernelStack: 10104 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102616 kB' 'Slab: 388960 kB' 'SReclaimable: 102616 kB' 'SUnreclaim: 286344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.362 07:39:14
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace elided for each node1 meminfo field (MemTotal through HugePages_Free) ...]
00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.363 node0=512 expecting 512 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:30.363 node1=512 expecting 512 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:30.363 00:03:30.363 real 0m3.047s 00:03:30.363 user 0m1.252s 00:03:30.363 sys 0m1.864s 00:03:30.363 07:39:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.363 07:39:14
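With both per-node surplus counts folded in, the test only has to compare each node's hugepage count against the expected even split. A condensed sketch of that final check, reconstructed from the setup/hugepages.sh@115-130 trace above (names follow the trace, the surrounding script is assumed; it reuses get_meminfo_sketch from the earlier sketch):

# Sketch of the per-node verification step (an approximation, not the full script).
expected=512
nodes_test=(512 512)    # per-node HugePages_Free, as read from the node meminfo above
ok=1
for node in "${!nodes_test[@]}"; do
    # Fold in any surplus pages the kernel reports for this node (0 in this run).
    surp=$(get_meminfo_sketch HugePages_Surp "$node") || surp=0
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting $expected"
    (( nodes_test[node] == expected )) || ok=0
done
(( ok ))   # passes only when every node matches, as the [[ 512 == 512 ]] above did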
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.363 ************************************ 00:03:30.363 END TEST per_node_1G_alloc 00:03:30.363 ************************************ 00:03:30.363 07:39:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.363 07:39:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:30.363 07:39:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.363 07:39:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.363 07:39:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.363 ************************************ 00:03:30.363 START TEST even_2G_alloc 00:03:30.363 ************************************ 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:30.363 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:30.364 07:39:14 
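The even_2G_alloc prologue above is pure arithmetic: a 2 GiB request at the default 2048 kB hugepage size is 1024 pages, spread evenly as 512 per node on this two-node box. A sketch of that derivation (assuming the size argument is expressed in kB, which the 1024-page result implies; not the verbatim get_test_nr_hugepages source):

# Sketch of the even per-node split set up by the trace above.
size=2097152                        # requested pool size in kB (2 GiB)
default_hugepages=2048              # Hugepagesize from /proc/meminfo, kB
nr_hugepages=$(( size / default_hugepages ))    # -> 1024, matching the trace
no_nodes=2
nodes_test=()
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # -> 512 per node
done
echo "${nodes_test[@]}"   # 512 512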
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.364 07:39:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.905 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.905 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.905 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.905 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.905 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.170 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.170 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.171 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.171 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.171 07:39:17 
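Before counting hugepages, verify_nr_hugepages rules out interference from transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above matches the kernel's THP mode string, where the bracketed word is the active setting. A sketch of that guard (an approximation of the traced logic; the sysfs path is the standard kernel THP switch):

# Sketch: account for anonymous THP only when THP is not disabled outright.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
# Only the "[never]" mode rules THP out entirely.
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # kB of anonymous hugepages in use (0 here)
fi
echo "anon=$anon"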
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.171 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175572776 kB' 'MemAvailable: 178440964 kB' 'Buffers: 4928 kB' 'Cached: 10159492 kB' 'SwapCached: 0 kB' 'Active: 7180368 kB' 'Inactive: 3508388 kB' 'Active(anon): 6788360 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527084 kB' 'Mapped: 218576 kB' 'Shmem: 6264024 kB' 'KReclaimable: 224488 kB' 'Slab: 770980 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546492 kB' 'KernelStack: 20400 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8286384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:33.171 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.171 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.171 07:39:17
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace elided for each field from MemFree through HardwareCorrupted ...]
00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175575396 kB' 'MemAvailable: 178443584 kB' 'Buffers: 4928 kB' 'Cached: 10159496 kB' 'SwapCached: 0 kB' 'Active: 7179448 kB' 'Inactive: 3508388 kB' 'Active(anon): 6787440 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526644 kB' 'Mapped: 218472 kB' 'Shmem: 6264028 kB' 'KReclaimable: 224488 kB' 'Slab: 770960 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546472 kB' 'KernelStack: 20400 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8286900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:33.172 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [key-by-key scan elided: every remaining /proc/meminfo key from MemFree through HugePages_Rsvd takes the 'continue' branch until HugePages_Surp matches]
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.174 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.175 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # [second full /proc/meminfo snapshot elided; unchanged from the previous read except MemFree: 175574396 kB, MemAvailable: 178442584 kB, Cached: 10159508 kB, Active: 7179668 kB, Active(anon): 6787660 kB, AnonPages: 526904 kB, Shmem: 6264040 kB, KernelStack: 20416 kB, PageTables: 8596 kB, Committed_AS: 8286920 kB]
00:03:33.175 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [key-by-key scan elided: every key from MemTotal through HugePages_Free takes the 'continue' branch until HugePages_Rsvd matches]
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.177 nr_hugepages=1024
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.177 resv_hugepages=0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.177 surplus_hugepages=0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.177 anon_hugepages=0
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
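The bookkeeping just traced (setup/hugepages.sh@97-109) boils down to: read the anonymous, surplus, and reserved hugepage counters, report them, and assert the configured pool size is consistent. A hedged sketch, reusing the get_meminfo sketch above (the exact assertions in hugepages.sh may be ordered slightly differently):

  # Values as read on this machine at this point in the run.
  nr_hugepages=1024                      # requested pool size of 2M pages
  anon=$(get_meminfo AnonHugePages)      # 0 kB - no transparent hugepages
  surp=$(get_meminfo HugePages_Surp)     # 0    - nothing over-allocated
  resv=$(get_meminfo HugePages_Rsvd)     # 0    - nothing reserved
  echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
       "surplus_hugepages=$surp" "anon_hugepages=$anon"
  # Consistency check mirrored from the trace: the kernel's total must
  # account for every requested, surplus, and reserved page.
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))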
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175573636 kB' 'MemAvailable: 178441824 kB' 'Buffers: 4928 kB' 'Cached: 10159508 kB' 'SwapCached: 0 kB' 'Active: 7180224 kB' 'Inactive: 3508388 kB' 'Active(anon): 6788216 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527540 kB' 'Mapped: 218472 kB' 'Shmem: 6264040 kB' 'KReclaimable: 224488 kB' 'Slab: 770960 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546472 kB' 'KernelStack: 20448 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8287696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 
07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.177 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:33.178 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: read -r var val _ walks the remaining /proc/meminfo keys (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); every key that is not HugePages_Total hits 'continue']
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
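The pass just traced is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time until it reaches the requested key, then echoing the value. As a reading aid, here is a minimal self-contained sketch of that idiom in bash; the function body is an approximation written for this log, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

    # Approximation of the get_meminfo idiom traced above: print the value of key $1,
    # optionally scoped to NUMA node $2 via the sysfs per-node meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines read "Node 0 MemTotal: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # the trailing "kB" unit lands in _
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total    # prints 1024 on the machine traced above
    get_meminfo HugePages_Surp 0   # per-node form, as in the node0 pass that follows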
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86975140 kB' 'MemUsed: 10687544 kB' 'SwapCached: 0 kB' 'Active: 4790252 kB' 'Inactive: 3336368 kB' 'Active(anon): 4632712 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946444 kB' 'Mapped: 71252 kB' 'AnonPages: 183384 kB' 'Shmem: 4452536 kB' 'KernelStack: 10296 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 382044 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 260172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:33.441 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the node0 keys from MemTotal through HugePages_Free are read and skipped with 'continue'; only HugePages_Surp is of interest]
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
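The hugepages.sh@115-@117 entries above run once per NUMA node, folding reserved and surplus pages into the expected per-node count. Sketched under the same assumptions as the helper above (variable names and initial values are taken from this run, not quoted from the script source):

    # Per-node accounting loop, approximating hugepages.sh@115-@117 for this run.
    declare -a nodes_test=(512 512)   # expected pages per node for the 2G even split
    resv=0                            # reserved hugepages; 0 in this trace
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                  # fold reserved pages in
        surp=$(get_meminfo HugePages_Surp "$node")      # helper sketched earlier
        (( nodes_test[node] += surp ))                  # both nodes report 0 here
    done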
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88601212 kB' 'MemUsed: 5117256 kB' 'SwapCached: 0 kB' 'Active: 2389396 kB' 'Inactive: 172020 kB' 'Active(anon): 2154928 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2218052 kB' 'Mapped: 147220 kB' 'AnonPages: 343456 kB' 'Shmem: 1811564 kB' 'KernelStack: 10104 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102616 kB' 'Slab: 388908 kB' 'SReclaimable: 102616 kB' 'SUnreclaim: 286292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:33.443 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the node1 keys from MemTotal through HugePages_Free are read and skipped with 'continue', exactly as for node0]
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:33.444 node0=512 expecting 512
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:33.444 node1=512 expecting 512
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:33.444 real 0m3.028s
00:03:33.444 user 0m1.212s
00:03:33.444 sys 0m1.886s
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:33.444 07:39:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:33.444 ************************************
00:03:33.444 END TEST even_2G_alloc
00:03:33.444 ************************************
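The @126-@130 tail that closes the test uses a small bash trick: indexing associative arrays by the per-node counts turns the array keys into a set, so an even split collapses both sets to the single key 512. A compact restatement, with array contents taken from this run rather than from the script source:

    # Approximation of the hugepages.sh@126-@130 verification for this run.
    declare -A sorted_t=() sorted_s=()
    declare -a nodes_test=(512 512)   # what the test expects per node
    declare -a nodes_sys=(512 512)    # what sysfs reported per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # keys act as a set of distinct counts
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Both key sets collapse to "512", which is what '[[ 512 == \5\1\2 ]]' checked above.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "even allocation verified"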
00:03:33.444 07:39:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:33.444 07:39:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:33.444 07:39:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:33.444 07:39:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:33.444 07:39:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.444 ************************************
00:03:33.444 START TEST odd_alloc
00:03:33.444 ************************************
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.444 07:39:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.983 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:35.983 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.983 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.245 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:36.245 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
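Before the meminfo dump that follows: the odd_alloc prologue above turned HUGEMEM=2049 (MB) into nr_hugepages=1025 and spread it over two nodes as 513 and 512. The arithmetic below reproduces those numbers; the rounding rule is inferred from the values in this trace rather than quoted from hugepages.sh:

    # Reproducing the odd_alloc sizing seen above (rounding rule assumed).
    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))          # 2098176, the get_test_nr_hugepages arg
    hugepagesize_kb=2048                      # 'Hugepagesize: 2048 kB' in the dump below
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025
    no_nodes=2
    declare -a nodes_test
    for (( n = 0; n < no_nodes; n++ )); do
        nodes_test[n]=$(( nr_hugepages / no_nodes ))      # 512 each
    done
    (( nodes_test[0] += nr_hugepages % no_nodes ))        # remainder to node0: 513/512
    echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"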
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528560 kB' 'Mapped: 218488 kB' 'Shmem: 6264184 kB' 'KReclaimable: 224488 kB' 'Slab: 771044 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546556 kB' 'KernelStack: 20432 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8287280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.246 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.246 [repetitive trace condensed: the read loop tests each /proc/meminfo field from Inactive through HardwareCorrupted against AnonHugePages and continues past every one]
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
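For readability, here is a minimal sketch of the lookup pattern this trace keeps exercising: the helper walks the meminfo key/value pairs with IFS=': ' and prints the value of the first key matching the requested name, so every non-matching field appears above as one [[ ... ]] test plus a continue. This is an illustrative re-creation, not the exact setup/common.sh implementation; the function name and the not-found return value are assumptions.

# Sketch only (assumption: plain /proc/meminfo, single-node case).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # one skipped field per "continue" entry above
        echo "$val"                       # a trailing " kB" unit lands in $_
        return 0
    done < /proc/meminfo
    return 1  # requested key not present (assumed behavior)
}

get_meminfo_sketch AnonHugePages   # prints 0 on the build host traced here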
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.247 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175583316 kB' 'MemAvailable: 178451504 kB' 'Buffers: 4928 kB' 'Cached: 10159656 kB' 'SwapCached: 0 kB' 'Active: 7182452 kB' 'Inactive: 3508388 kB' 'Active(anon): 6790444 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529680 kB' 'Mapped: 218988 kB' 'Shmem: 6264188 kB' 'KReclaimable: 224488 kB' 'Slab: 771056 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546568 kB' 'KernelStack: 20400 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8289448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:36.248 [repetitive trace condensed: the read loop tests every field from MemTotal through HugePages_Rsvd against HugePages_Surp and continues past each]
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
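The mem=("${mem[@]#Node +([0-9]) }") step in the setup above exists for the per-node case: when a node number is supplied, the helper reads /sys/devices/system/node/node$node/meminfo, whose lines carry a "Node N " prefix, and the extglob pattern strips that prefix so the same key matching works for both files. A small sketch of that branch, assuming node 0 exists and extglob is enabled; variable names follow the trace:

# Per-node branch sketch (assumption: NUMA node 0 is present on the host).
shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]:0:3}"      # first few cleaned lines, for inspection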
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.249 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175577016 kB' 'MemAvailable: 178445204 kB' 'Buffers: 4928 kB' 'Cached: 10159660 kB' 'SwapCached: 0 kB' 'Active: 7187124 kB' 'Inactive: 3508388 kB' 'Active(anon): 6795116 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534472 kB' 'Mapped: 218988 kB' 'Shmem: 6264192 kB' 'KReclaimable: 224488 kB' 'Slab: 771056 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546568 kB' 'KernelStack: 20448 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8293440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:36.250 [repetitive trace condensed: the read loop tests every field from MemTotal through HugePages_Free against HugePages_Rsvd and continues past each]
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:36.251 nr_hugepages=1025
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:36.251 resv_hugepages=0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:36.251 surplus_hugepages=0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:36.251 anon_hugepages=0
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
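The two arithmetic guards just traced are the point of the odd_alloc pass: after requesting an odd hugepage count, the test requires zero surplus and zero reserved pages and an exact allocation. A condensed restatement, reusing the sketch helper from above; the literal 1025 is assumed to be the count requested earlier in the test, and in the real script nr_hugepages is set before these checks rather than fetched here:

# Restatement of the hugepages.sh@97-@109 checks (names illustrative).
expected=1025                                    # the odd count requested earlier (assumed)
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
nr_hugepages=$(get_meminfo_sketch HugePages_Total)
(( expected == nr_hugepages + surp + resv ))     # nothing surplus or reserved
(( expected == nr_hugepages ))                   # the odd count was honored exactly

Both (( ... )) expressions exit nonzero when false, which is what would abort the run under the set -ex mode this job executes with.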
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.251 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.513 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.513 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.513 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.513 07:39:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175579764 kB' 'MemAvailable: 178447952 kB' 'Buffers: 4928 kB' 'Cached: 10159672 kB' 'SwapCached: 0 kB' 'Active: 7186636 kB' 'Inactive: 3508388 kB' 'Active(anon): 6794628 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533908 kB' 'Mapped: 219320 kB' 'Shmem: 6264204 kB' 'KReclaimable: 224488 kB' 'Slab: 771056 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546568 kB' 'KernelStack: 20416 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8293460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:36.513 [repetitive trace condensed: the read loop tests each field from MemTotal through PageTables against HugePages_Total and continues past each; the scan carries on below]
00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.514 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86970408 kB' 'MemUsed: 10692276 kB' 'SwapCached: 0 kB' 'Active: 4789480 kB' 'Inactive: 3336368 kB' 'Active(anon): 4631940 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 
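The xtrace above is one call into the get_meminfo helper from SPDK's test/setup/common.sh: it reads /proc/meminfo (or a per-NUMA-node meminfo file when a node index is passed), strips the "Node N " prefix that the per-node files carry, then repeats the same compare-and-continue step once per field until it reaches the requested key and echoes its value. A minimal standalone sketch of that parsing pattern follows; the names mirror the trace, but the body is a reconstruction from the trace, not the exact SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-node meminfo file when NODE is given. (Reconstruction from the
    # trace above, not the exact SPDK implementation.)
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # e.g. 1025 for HugePages_Total in this run
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total    # system-wide
    get_meminfo HugePages_Surp 0   # NUMA node 0 only

The (( 1025 == nr_hugepages + surp + resv )) check right above then asserts that the kernel's HugePages_Total equals the requested page count plus any surplus and reserved pages.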
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86970408 kB' 'MemUsed: 10692276 kB' 'SwapCached: 0 kB' 'Active: 4789480 kB' 'Inactive: 3336368 kB' 'Active(anon): 4631940 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946584 kB' 'Mapped: 71252 kB' 'AnonPages: 182572 kB' 'Shmem: 4452676 kB' 'KernelStack: 10296 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 382132 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 260260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.515 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
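The hugepages.sh@115-117 loop being traced folds reserved and surplus pages into the expected per-node counts before the final comparison. In outline, reusing the get_meminfo sketch above (the counts 512/513 and the zero surplus/reserve are the values observed in this run, not general constants):

    # Expected hugepage count per NUMA node for this odd_alloc run.
    nodes_test=([0]=512 [1]=513)
    resv=0   # reserved pages, read earlier in the log

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # hugepages.sh@116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # hugepages.sh@117
    done
    # node0 stays 512 and node1 stays 513, since surplus and reserved are both 0 here.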
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88612588 kB' 'MemUsed: 5105880 kB' 'SwapCached: 0 kB' 'Active: 2391184 kB' 'Inactive: 172020 kB' 'Active(anon): 2156716 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2218076 kB' 'Mapped: 147232 kB' 'AnonPages: 345324 kB' 'Shmem: 1811588 kB' 'KernelStack: 10104 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102616 kB' 'Slab: 388924 kB' 'SReclaimable: 102616 kB' 'SUnreclaim: 286308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.516 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:03:36.517 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.517 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.517 07:39:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:36.517 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:36.518
00:03:36.518 real 0m3.025s
00:03:36.518 user 0m1.248s
00:03:36.518 sys 0m1.844s
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:36.518 07:39:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:36.518 ************************************
00:03:36.518 END TEST odd_alloc
00:03:36.518 ************************************
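odd_alloc reduces to one assertion: after asking for an odd total of hugepages (1025) on a two-node machine, the kernel should place them in an uneven 512/513 split. The "node0=512 expecting 513" messages show the split can land on either node, which is why the test sorts both sides before comparing; the final [[ 512 513 == \5\1\2\ \5\1\3 ]] is that comparison after sorting. Schematically (a reconstruction from the trace, not the SPDK source):

    # Schematic form of the final odd_alloc check traced above.
    nodes_test=([0]=512 [1]=513)   # what the test expects per node
    nodes_sys=([0]=512 [1]=513)    # what /sys reported per node
    sorted_t=() sorted_s=()

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # indexed array: indices come back ascending
        sorted_s[nodes_sys[node]]=1
    done
    # ${!sorted_t[*]} expands to "512 513"; the test asserts both sides agree.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc OK: ${!sorted_t[*]}"

Using the counts themselves as array indices is what makes the index expansion come out sorted without an explicit sort call.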
00:03:36.518 07:39:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:36.518 07:39:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:36.518 07:39:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:36.518 07:39:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:36.518 07:39:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.518 ************************************
00:03:36.518 START TEST custom_alloc
00:03:36.518 ************************************
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
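The numbers in that first get_test_nr_hugepages call are internally consistent if size is taken in kB: 1048576 kB (1 GiB) divided by the 2048 kB Hugepagesize reported earlier gives nr_hugepages=512, and with no user node list the per-node helper spreads them evenly, 256 per node, filling the table back to front. A sketch of the arithmetic as implied by the trace (the kB assumption and the explicit decrement are inferences, not confirmed SPDK code):

    # Arithmetic implied by the trace (sizes assumed to be in kB).
    size=1048576 hugepagesize=2048 no_nodes=2
    nr_hugepages=$(( size / hugepagesize ))   # 1048576 / 2048 = 512
    per_node=$(( nr_hugepages / no_nodes ))   # 512 / 2 = 256

    # The traced countdown loop fills the per-node table back to front:
    nodes_test=()
    _no_nodes=$no_nodes
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node   # hugepages.sh@82: node1=256, then node0=256
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # 256 256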
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.518 07:39:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
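custom_alloc's per-node targets come from nodes_hp (512 pages on node 0, 1024 on node 1), and the HUGENODE value handed to scripts/setup.sh is just those pairs joined with the function-local IFS=, set at the top of the trace. A sketch of how the traced loop assembles it (reconstructed from the trace, with this run's values):

    # How the traced loop assembles HUGENODE for scripts/setup.sh in this run.
    IFS=,   # custom_alloc sets 'local IFS=,' so "${HUGENODE[*]}" joins with commas
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # hugepages.sh@182
        (( _nr_hugepages += nodes_hp[node] ))             # hugepages.sh@183
    done
    echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "total=$_nr_hugepages"      # total=1536

The nr_hugepages=1536 reported right after setup.sh returns below is consistent with that sum: 512 + 1024.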
00:03:39.818 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.818 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.818 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
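The common.sh@17-31 entries above (and the meminfo dump that follows) are the get_meminfo helper at work: slurp the stats file, strip any per-node "Node N " prefix, then scan field by field for the requested key. A minimal sketch of that helper, reconstructed from the xtrace; the loop shape and the here-string feed are assumptions, not the verbatim SPDK setup/common.sh:

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Sketch of get_meminfo as implied by the common.sh@17-33 xtrace here.
get_meminfo() {
  local get=$1 node=${2:-}
  local var val
  local mem_f mem
  mem_f=/proc/meminfo
  # With a node argument, read the per-node copy instead; its lines carry a
  # "Node N " prefix, which the pattern strip below removes.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")
  local line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
    echo "$val"                        # common.sh@33: echo value, return 0
    return 0
  done
  return 1
}

get_meminfo AnonHugePages   # prints 0 on this box, hence anon=0 below
```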
00:03:39.819 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174545432 kB' 'MemAvailable: 177413620 kB' 'Buffers: 4928 kB' 'Cached: 10159808 kB' 'SwapCached: 0 kB' 'Active: 7182380 kB' 'Inactive: 3508388 kB' 'Active(anon): 6790372 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529336 kB' 'Mapped: 218536 kB' 'Shmem: 6264340 kB' 'KReclaimable: 224488 kB' 'Slab: 770580 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546092 kB' 'KernelStack: 20464 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8287820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31-32 reads the fields of this dump one at a time and hits "continue" for every key from MemTotal through HardwareCorrupted before AnonHugePages matches]
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.820 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174545752 kB' 'MemAvailable: 177413940 kB' 'Buffers: 4928 kB' 'Cached: 10159812 kB' 'SwapCached: 0 kB' 'Active: 7181556 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789548 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528500 kB' 'Mapped: 218500 kB' 'Shmem: 6264344 kB' 'KReclaimable: 224488 kB' 'Slab: 770616 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546128 kB' 'KernelStack: 20368 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8287840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31-32 scans this dump key by key and hits "continue" for every key from MemTotal through HugePages_Rsvd before HugePages_Surp matches]
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.822 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
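At this point anon and surp have both come back 0 and the reserved-page probe is next. Illustrative arithmetic only, not the test's literal assertion: verify_nr_hugepages is establishing that the kernel's 1536-page pool matches the requested per-node layout.

```bash
#!/usr/bin/env bash
# Illustrative check only (hypothetical variable names): with anon=0, surp=0,
# and HugePages_Rsvd about to come back 0, the kernel pool should equal the
# sum of the per-node counts requested via HUGENODE above.
nr_hugepages=1536
nodes_hp=([0]=512 [1]=1024)
total=0
for node in "${!nodes_hp[@]}"; do
  ((total += nodes_hp[node]))
done
((total == nr_hugepages)) && echo "per-node layout adds up: $total pages"
```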
00:03:39.823 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174546012 kB' 'MemAvailable: 177414200 kB' 'Buffers: 4928 kB' 'Cached: 10159828 kB' 'SwapCached: 0 kB' 'Active: 7181808 kB' 'Inactive: 3508388 kB' 'Active(anon): 6789800 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528748 kB' 'Mapped: 218500 kB' 'Shmem: 6264360 kB' 'KReclaimable: 224488 kB' 'Slab: 770616 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546128 kB' 'KernelStack: 20416 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8287860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[xtrace condensed: setup/common.sh@31-32 scans this dump for HugePages_Rsvd and hits "continue" for every key from MemTotal through Unaccepted; the scan is still in progress below]
00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:39.825 nr_hugepages=1536 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.825 resv_hugepages=0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.825 surplus_hugepages=0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.825 anon_hugepages=0 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174545512 kB' 'MemAvailable: 177413700 kB' 'Buffers: 4928 kB' 'Cached: 10159828 kB' 'SwapCached: 0 kB' 'Active: 7182080 kB' 'Inactive: 3508388 kB' 'Active(anon): 6790072 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529068 kB' 'Mapped: 218500 kB' 'Shmem: 
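The trace above is the expansion of a small key-lookup helper in setup/common.sh: it reads a meminfo file line by line, splits each "Key: value" pair on ': ', and echoes the value once the requested key matches. A minimal sketch of that logic, reconstructed from the trace alone (the function and variable names all appear in the trace; the sed-based "Node N" prefix strip is an assumption standing in for the mapfile/parameter-expansion combination the real script uses):

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Each line reads "Key:   value [kB]"; splitting on ': ' puts the
        # key in $var and the number in $val, exactly as the traced loop does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # assumption: stands in for mapfile + mem=("${mem[@]#Node +([0-9]) }")
        return 1
    }

    get_meminfo HugePages_Rsvd    # 0 in this run
    get_meminfo HugePages_Total   # 1536 in this run

The escaped pattern in the xtrace output ([[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]) is just how bash prints a quoted, non-glob right-hand side; it is the same plain string equality used here.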
00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.825 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174545512 kB' 'MemAvailable: 177413700 kB' 'Buffers: 4928 kB' 'Cached: 10159828 kB' 'SwapCached: 0 kB' 'Active: 7182080 kB' 'Inactive: 3508388 kB' 'Active(anon): 6790072 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529068 kB' 'Mapped: 218500 kB' 'Shmem: 6264360 kB' 'KReclaimable: 224488 kB' 'Slab: 770616 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546128 kB' 'KernelStack: 20432 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8289004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[... identical skip/continue trace elided for MemTotal through Unaccepted; the read loop stops at HugePages_Total ...]
00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
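get_nodes above records the hugepage total for each NUMA node: 512 pages on node0 and 1024 on node1, which sum to the 1536 system-wide pages the @110 check just verified as nr_hugepages + surp + resv (and, at 2048 kB per page, exactly the 3145728 kB Hugetlb figure in the dump above). A sketch of that enumeration, under the assumption that the recorded values come from each node's HugePages_Total via the get_meminfo helper sketched earlier (the trace shows only the resulting assignments):

    shopt -s extglob                 # the +([0-9]) glob below needs extglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips the path up to the last "node",
            # leaving the numeric node id used as the array index.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}    # 2 on this machine
        (( no_nodes > 0 ))           # the allocator needs at least one node
    }

    get_nodes && echo "nodes 0..$((no_nodes - 1)): ${nodes_sys[*]}"   # nodes 0..1: 512 1024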
00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.827 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86975144 kB' 'MemUsed: 10687540 kB' 'SwapCached: 0 kB' 'Active: 4791788 kB' 'Inactive: 3336368 kB' 'Active(anon): 4634248 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946716 kB' 'Mapped: 71260 kB' 'AnonPages: 184720 kB' 'Shmem: 4452808 kB' 'KernelStack: 10328 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 381672 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 259800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical skip/continue trace elided for MemTotal through HugePages_Free; the read loop stops at HugePages_Surp ...]
00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
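The hugepages.sh@115-@117 loop folds the global reservation and each node's surplus pages into the expected per-node count (surplus pages are kernel-side overcommit pages that would inflate a node's count past its static allocation). A sketch of that accounting, with the assumption that nodes_test starts from the per-node totals recorded by get_nodes, since the trace shows only the += steps:

    resv=0                       # HugePages_Rsvd, read earlier in this trace
    nodes_test=(512 1024)        # assumed starting point: the get_nodes totals

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                     # reserved pages, 0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # surplus, 0 on both nodes
    done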
00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.829 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87569996 kB' 'MemUsed: 6148472 kB' 'SwapCached: 0 kB' 'Active: 2390776 kB' 'Inactive: 172020 kB' 'Active(anon): 2156308 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2218084 kB' 'Mapped: 147752 kB' 'AnonPages: 344820 kB' 'Shmem: 1811596 kB' 'KernelStack: 10232 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102616 kB' 'Slab: 388944 kB' 'SReclaimable: 102616 kB' 'SUnreclaim: 286328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical skip/continue trace elided for MemTotal through HugePages_Free; the read loop stops at HugePages_Surp ...]
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.830 07:39:24
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:39.830 node0=512 expecting 512
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:39.830 node1=1024 expecting 1024
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:39.830
00:03:39.830 real 0m3.059s
00:03:39.830 user 0m1.206s
00:03:39.830 sys 0m1.921s
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:39.830 07:39:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:39.830 ************************************
00:03:39.830 END TEST custom_alloc
00:03:39.830 ************************************
00:03:39.830 07:39:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:39.831 07:39:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:39.831 07:39:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:39.831 07:39:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:39.831 07:39:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:39.831 ************************************
00:03:39.831 START TEST no_shrink_alloc
00:03:39.831 ************************************
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
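The custom_alloc epilogue above boils down to a per-node comparison: nodes_test holds the expected page counts (512 on node0, 1024 on node1), and hugepages.sh@130 matches the joined counts against the expected "512,1024" string. A minimal standalone sketch of that check, reconstructed from the xtrace (hypothetical; the real setup/hugepages.sh may differ, and the sysfs path assumes 2048 kB hugepages):

    #!/usr/bin/env bash
    # Hedged sketch: per-node hugepage verification as traced above.
    nodes_test=([0]=512 [1]=1024)    # expected pages per NUMA node

    got=()
    for node in "${!nodes_test[@]}"; do
        f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        actual=$(<"$f")
        echo "node$node=$actual expecting ${nodes_test[node]}"
        got+=("$actual")
    done

    # hugepages.sh@130 compares the joined counts: [[ 512,1024 == 512,1024 ]]
    (IFS=,; [[ "${got[*]}" == 512,1024 ]]) && echo "per-node layout OK"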
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:39.831 07:39:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:42.371 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:42.371 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
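The last trace line above, `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, is verify_nr_hugepages checking the active transparent-hugepage mode: the kernel brackets the current choice in /sys/kernel/mm/transparent_hugepage/enabled, and the AnonHugePages measurement that follows is only meaningful when that mode is not [never]. A minimal sketch of the same gate (hedged reconstruction, not the script itself):

    # Hedged sketch: skip AnonHugePages accounting when THP is disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=$anon"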
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.371 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175600564 kB' 'MemAvailable: 178468752 kB' 'Buffers: 4928 kB' 'Cached: 10159968 kB' 'SwapCached: 0 kB' 'Active: 7183696 kB' 'Inactive: 3508388 kB' 'Active(anon): 6791688 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529848 kB' 'Mapped: 218640 kB' 'Shmem: 6264500 kB' 'KReclaimable: 224488 kB' 'Slab: 770756 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546268 kB' 'KernelStack: 20480 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8291120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315096 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[xtrace elided: 00:03:42.371-00:03:42.372 setup/common.sh@31-32 -- get_meminfo scans every key from MemTotal through HardwareCorrupted with `continue` until the AnonHugePages line matches]
00:03:42.372 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.372 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.372 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:42.372 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
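Each of these lookups runs the same get_meminfo helper (setup/common.sh@17-33): slurp /proc/meminfo (or a per-node meminfo file when a node argument is given), strip any "Node N " prefix, then split each line on ': ' and print the value of the first matching key. A hedged reconstruction from the xtrace (the real setup/common.sh may differ in details):

    shopt -s extglob                        # for the +([0-9]) pattern below

    get_meminfo() {                         # usage: get_meminfo <Key> [node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node lookup when the node's meminfo is exposed by sysfs
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop "Node N " prefixes of per-node files
        # Scan key/value pairs; print the value of the requested key
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages    # -> 0 on this box, matching the anon=0 above
    get_meminfo HugePages_Surp   # -> 0, matching the surp=0 that follows

Note how IFS=': ' makes read split "AnonHugePages: 0 kB" into var=AnonHugePages, val=0, which is exactly why the trace shows "# echo 0" at the match.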
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.637 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175601084 kB' 'MemAvailable: 178469272 kB' 'Buffers: 4928 kB' 'Cached: 10159972 kB' 'SwapCached: 0 kB' 'Active: 7183516 kB' 'Inactive: 3508388 kB' 'Active(anon): 6791508 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529704 kB' 'Mapped: 218608 kB' 'Shmem: 6264504 kB' 'KReclaimable: 224488 kB' 'Slab: 770704 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546216 kB' 'KernelStack: 20480 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8291136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[xtrace elided: 00:03:42.637-00:03:42.639 setup/common.sh@31-32 -- get_meminfo scans every key from MemTotal through HugePages_Rsvd with `continue` until the HugePages_Surp line matches]
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
20512 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8289668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315016 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.639 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
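The xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time: it mapfiles the file, strips any "Node <N>" prefixes, then reads each line with IFS=': ' and continues until the first field matches the requested key, echoing that key's value. Below is a minimal standalone sketch of the same idiom, reconstructed from the trace rather than copied from the source, assuming the usual /proc/meminfo and /sys/devices/system/node layout; the function name is illustrative.

#!/usr/bin/env bash
# Illustrative reconstruction of the get_meminfo idiom traced above.
shopt -s extglob   # needed for the +([0-9]) pattern that strips node prefixes

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _
    # Per-node counters live under /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that so the
    # key always sits in the first IFS=': ' field.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # numeric value only; the trailing "kB" is dropped
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. surp=$(get_meminfo_sketch HugePages_Surp)    # system-wide -> 0 in this run
#      free=$(get_meminfo_sketch HugePages_Free 0)  # NUMA node 0 -> 1024 in this run

As the rest of the trace below shows, HugePages_Total/Free/Rsvd/Surp come back as 1024/1024/0/0 in this run, so both the accounting check (( 1024 == nr_hugepages + surp + resv )) and the per-node expectation "node0=1024 expecting 1024" pass before setup.sh is invoked again with CLEAR_HUGE=no and NRHUGE=512.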
00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.640 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.641 nr_hugepages=1024 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.641 resv_hugepages=0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.641 surplus_hugepages=0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.641 anon_hugepages=0 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.641 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175603460 kB' 'MemAvailable: 178471648 kB' 'Buffers: 4928 kB' 'Cached: 10160012 kB' 'SwapCached: 0 kB' 'Active: 7183312 kB' 'Inactive: 3508388 kB' 'Active(anon): 6791304 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529912 kB' 'Mapped: 218524 kB' 'Shmem: 6264544 kB' 'KReclaimable: 224488 kB' 'Slab: 770680 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546192 kB' 'KernelStack: 20560 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8291184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315016 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.642 07:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.642 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.643 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85928200 kB' 'MemUsed: 11734484 kB' 'SwapCached: 0 kB' 'Active: 4792248 kB' 'Inactive: 3336368 kB' 'Active(anon): 4634708 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946872 kB' 'Mapped: 71268 kB' 'AnonPages: 184864 kB' 'Shmem: 4452964 kB' 'KernelStack: 10312 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 381740 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 259868 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 
07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.644 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- 
[... setup/common.sh@31/@32 xtrace elided: the remaining /proc/meminfo keys (Shmem through HugePages_Free) are each read and compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, then skipped with continue ...]
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:42.645 node0=1024 expecting 1024
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:42.645 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:42.646 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:42.646 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.646 07:39:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:45.183 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:45.183 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.183 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:45.446 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:45.446 INFO: Requested 512 hugepages but 1024 already allocated on node0
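The INFO line above is setup.sh keeping the existing reservation: NRHUGE=512 was requested with CLEAR_HUGE=no, and node0 already holds 1024 pages. A minimal sketch of that decision follows; the sysfs paths are standard kernel interfaces and the message wording follows the log, but the if/else logic is an assumption, not SPDK's exact code:

    # Sketch: keep an existing reservation when it already covers the request.
    # (Assumed logic; only the INFO wording is taken from the log above.)
    node=node0
    hp=/sys/devices/system/node/$node/hugepages/hugepages-2048kB
    nr=$(<"$hp/nr_hugepages")
    if [[ $CLEAR_HUGE != yes && $nr -ge $NRHUGE ]]; then
        echo "INFO: Requested $NRHUGE hugepages but $nr already allocated on $node"
    else
        echo "$NRHUGE" > "$hp/nr_hugepages"
    fi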
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.446 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175598720 kB' 'MemAvailable: 178466908 kB' 'Buffers: 4928 kB' 'Cached: 10160088 kB' 'SwapCached: 0 kB' 'Active: 7185800 kB' 'Inactive: 3508388 kB' 'Active(anon): 6793792 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532076 kB' 'Mapped: 218612 kB' 'Shmem: 6264620 kB' 'KReclaimable: 224488 kB' 'Slab: 770996 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546508 kB' 'KernelStack: 20816 kB' 'PageTables: 10188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8289848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315096 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[... setup/common.sh@31/@32 xtrace elided: every key ahead of AnonHugePages (MemTotal through HardwareCorrupted) is compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with continue ...]
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
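All of the @31/@32 read/compare/continue traffic above comes from a single helper that walks /proc/meminfo line by line; the right-hand side shows up as \A\n\o\n\H\u\g\e\P\a\g\e\s because bash's xtrace backslash-escapes the quoted pattern inside [[ ]]. A minimal reconstruction of that helper from the xtrace itself (setup/common.sh@17-@33): variable names follow the trace, but the real script may differ in detail:

    # Reconstructed from the trace, not copied from common.sh: print the value
    # of one meminfo key, optionally for a single NUMA node.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}          # @17/@18: key, optional node index
        local var val _ mem_f mem line
        mem_f=/proc/meminfo               # @22: default source
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo   # @23
        mapfile -t mem < "$mem_f"                  # @28
        mem=("${mem[@]#Node +([0-9]) }")           # @29: drop "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line" # @31: split "Key: value kB"
            [[ $var == "$get" ]] || continue       # @32: one trace line per skipped key
            echo "${val:-0}"                       # @33: the "echo 0" seen above
            return 0
        done
        return 1
    }

Called as get_meminfo AnonHugePages it prints 0 here, matching the 'AnonHugePages: 0 kB' entry in the snapshot above.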
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... setup/common.sh@18-@31 prologue identical to the AnonHugePages lookup above ...]
00:03:45.447 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175599732 kB' 'MemAvailable: 178467920 kB' 'Buffers: 4928 kB' 'Cached: 10160092 kB' 'SwapCached: 0 kB' 'Active: 7184200 kB' 'Inactive: 3508388 kB' 'Active(anon): 6792192 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531224 kB' 'Mapped: 218524 kB' 'Shmem: 6264624 kB' 'KReclaimable: 224488 kB' 'Slab: 771040 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546552 kB' 'KernelStack: 20416 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8288740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
[... setup/common.sh@31/@32 xtrace elided: every key ahead of HugePages_Surp (MemTotal through HugePages_Rsvd) is compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue ...]
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
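hugepages.sh@97-@100 collect the three counters that feed the verification bookkeeping. The calls themselves are confirmed by the trace; stringing them together, and the per-node variant, is a sketch using the helper reconstructed above, not hugepages.sh verbatim:

    anon=$(get_meminfo AnonHugePages)    # @97: THP-backed anon memory, 0 kB here
    surp=$(get_meminfo HugePages_Surp)   # @99: surplus beyond nr_hugepages, 0 here
    resv=$(get_meminfo HugePages_Rsvd)   # @100: reserved but not yet faulted in
    # Per-node variant (assumption): the node argument switches the source to
    # /sys/devices/system/node/node0/meminfo, whose "Node 0 " prefix @29 strips.
    node0_total=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0_total"            # cf. 'node0=1024 expecting 1024' above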
[... setup/common.sh@20-@29 prologue identical to the lookups above ...]
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175608240 kB' 'MemAvailable: 178476428 kB' 'Buffers: 4928 kB' 'Cached: 10160108 kB' 'SwapCached: 0 kB' 'Active: 7185252 kB' 'Inactive: 3508388 kB' 'Active(anon): 6793244 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532284 kB' 'Mapped: 218524 kB' 'Shmem: 6264640 kB' 'KReclaimable: 224488 kB' 'Slab: 771024 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546536 kB' 'KernelStack: 20368 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8301848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB'
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.448 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31/@32 xtrace: per-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d under way (MemTotal, MemFree, ... PageTables); the captured log breaks off mid-scan at 00:03:45.449 ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.449 nr_hugepages=1024 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.449 resv_hugepages=0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.449 surplus_hugepages=0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.449 anon_hugepages=0 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:45.449 07:39:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175609788 kB' 'MemAvailable: 178477976 kB' 'Buffers: 4928 kB' 'Cached: 10160128 kB' 'SwapCached: 0 kB' 'Active: 7183876 kB' 'Inactive: 3508388 kB' 'Active(anon): 6791868 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3508388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530920 kB' 'Mapped: 218524 kB' 'Shmem: 6264660 kB' 'KReclaimable: 224488 kB' 'Slab: 771024 kB' 'SReclaimable: 224488 kB' 'SUnreclaim: 546536 kB' 'KernelStack: 20384 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8288420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 73344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2649044 kB' 'DirectMap2M: 14856192 kB' 'DirectMap1G: 184549376 kB' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.449 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
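The scan elided above is the get_meminfo pattern from setup/common.sh: snapshot the meminfo file once, strip any per-node prefix, then split each line on IFS=': ' until the requested key turns up. A minimal standalone sketch of that pattern (a hedged reconstruction, not the SPDK source; the helper name just mirrors the trace):

  #!/usr/bin/env bash
  # Sketch of the lookup the xtrace above replays (illustrative, not SPDK's common.sh).
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      # A node argument switches to the per-node view when the kernel exposes one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node +([0-9]) }            # per-node lines carry a "Node <n> " prefix
          IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Rsvd val=0
          if [[ $var == "$get" ]]; then
              echo "$val"                        # the trace ends this call with: echo 0
              return 0
          fi
      done <"$mem_f"
      return 1
  }

  resv=$(get_meminfo HugePages_Rsvd)             # 0 on this box, hence resv=0 above

The hundreds of continue lines in the raw log are this loop rejecting every key ahead of the target, one read per key, under xtrace.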
[xtrace elided: the same setup/common.sh@31-32 scan walks the snapshot above key by key, one IFS=': ' / read -r var val _ / continue triple each, until HugePages_Total matches]
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85939924 kB' 'MemUsed: 11722760 kB' 'SwapCached: 0 kB' 'Active: 4791548 kB' 'Inactive: 3336368 kB' 'Active(anon): 4634008 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336368 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7946912 kB' 'Mapped: 71252 kB' 'AnonPages: 184416 kB' 'Shmem: 4453004 kB' 'KernelStack: 10296 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121872 kB' 'Slab: 381808 kB' 'SReclaimable: 121872 kB' 'SUnreclaim: 259936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
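get_nodes in the trace globs /sys/devices/system/node/node+([0-9]) (extglob) and records a hugepage count per node; the HugePages_Surp lookup that follows reuses get_meminfo against node0's per-node meminfo. The same census can be taken self-containedly from the per-node hugepages sysfs pools, as in this sketch (illustrative; the trace's own code goes through the per-node meminfo files instead):

  #!/usr/bin/env bash
  # Sketch of a per-node hugepage census like the one get_nodes performs.
  shopt -s extglob nullglob

  declare -a nodes_test
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node} total=0                     # "node0" -> "0"
      # Tally every pool size (hugepages-2048kB, hugepages-1048576kB, ...).
      for pool in "$node"/hugepages/hugepages-*/nr_hugepages; do
          (( total += $(<"$pool") ))
      done
      nodes_test[n]=$total
  done
  echo "no_nodes=${#nodes_test[@]}"                # 2 on this machine
  echo "node0=${nodes_test[0]:-0} expecting 1024"  # mirrors the trace's final check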
00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.712 node0=1024 expecting 1024 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.712 00:03:45.712 real 0m5.971s 00:03:45.712 user 0m2.446s 00:03:45.712 sys 0m3.657s 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.712 07:39:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.712 ************************************ 00:03:45.712 END TEST no_shrink_alloc 00:03:45.712 ************************************ 00:03:45.712 07:39:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.712 07:39:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.712 00:03:45.712 real 0m22.641s 00:03:45.712 user 0m8.860s 00:03:45.712 sys 0m13.504s 00:03:45.712 07:39:30 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.712 07:39:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.712 ************************************ 00:03:45.712 END TEST hugepages 00:03:45.713 ************************************ 00:03:45.713 07:39:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0
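The clear_hp teardown traced above reduces to a single idiom: walk every NUMA node's hugepage directories in sysfs, hand the pages back to the kernel, then flag the pools as cleared for later setup.sh runs. A minimal standalone sketch of that idiom (assumes root and the standard sysfs layout; the xtrace does not show the redirect target, which is assumed here to be nr_hugepages):

    #!/usr/bin/env bash
    # Free every hugepage pool on every NUMA node (mirrors hugepages.sh@37-45 above).
    shopt -s nullglob
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target file; not visible in the xtrace
        done
    done
    export CLEAR_HUGE=yes                 # later setup.sh invocations skip re-allocation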
00:03:45.713 07:39:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:45.713 07:39:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.713 07:39:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.713 07:39:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.713 ************************************ 00:03:45.713 START TEST driver 00:03:45.713 ************************************ 00:03:45.713 07:39:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:45.713 * Looking for test storage... 00:03:45.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.713 07:39:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:45.713 07:39:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.713 07:39:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.949 07:39:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:49.949 07:39:34 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.949 07:39:34 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.949 07:39:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.949 ************************************ 00:03:49.949 START TEST guess_driver 00:03:49.949 ************************************ 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:49.949 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
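Two probes decide everything in the trace above: pick_driver asks whether /sys/kernel/iommu_groups is populated (174 groups on this node) and whether modprobe can resolve vfio_pci to a chain of real .ko modules; only then is vfio-pci chosen. A rough standalone equivalent of that decision, with the uio_pci_generic fallback assumed from setup.sh behaviour (the fallback path is not exercised in this run):

    #!/usr/bin/env bash
    # Choose the userspace PCI driver the way driver.sh@36 (vfio) does above.
    shopt -s nullglob                  # an empty iommu_groups dir must count as zero
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci                  # IOMMU active and the module chain resolves
    else
        echo uio_pci_generic           # assumed fallback; this log only shows the vfio path
    fi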
00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:49.949 Looking for driver=vfio-pci 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.949 07:39:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.238 07:39:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.806 07:39:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.002 00:03:58.002 real 0m7.989s 00:03:58.002 user 0m2.321s 00:03:58.002 sys 0m4.093s 00:03:58.002 07:39:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.002 07:39:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.002 ************************************ 00:03:58.002 END TEST guess_driver 00:03:58.002 ************************************ 00:03:58.002 07:39:42 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:58.002 00:03:58.002 real 0m12.232s 00:03:58.002 user 0m3.509s 00:03:58.002 sys 0m6.346s 00:03:58.002 07:39:42 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.003 07:39:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.003 ************************************ 00:03:58.003 END TEST driver 00:03:58.003 ************************************ 00:03:58.003 07:39:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:58.003 07:39:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:58.003 07:39:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.003 07:39:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.003 07:39:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.003 ************************************ 00:03:58.003 START TEST devices 00:03:58.003 ************************************ 00:03:58.003 07:39:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:58.003 * Looking for test storage... 00:03:58.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.003 07:39:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:58.003 07:39:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:58.003 07:39:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.003 07:39:42 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
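Before a test disk is selected, get_zoned_devs (just traced) screens out zoned namespaces, because the partition-and-mkfs workflow that follows assumes a conventional block device; nvme0n1 reports none in queue/zoned and passes, and only disks of at least min_disk_size (3 GiB) survive the next filter. A compact sketch of the zoned screen, under the sysfs layout shown in the trace:

    #!/usr/bin/env bash
    # Collect zoned NVMe block devices, mirroring autotest_common.sh@1669-1673 above.
    shopt -s nullglob
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1    # zoned model: skip it for the mount tests
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"   # empty here; nvme0n1 is conventional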
00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:01.295 07:39:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:01.295 No valid GPT data, bailing 00:04:01.295 07:39:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:01.295 07:39:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:01.295 07:39:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:01.295 07:39:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.295 07:39:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.295 ************************************ 00:04:01.295 START TEST nvme_mount 00:04:01.295 ************************************ 00:04:01.295 07:39:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 --
# (( part <= part_no )) 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.296 07:39:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:02.676 Creating new GPT entries in memory. 00:04:02.676 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.676 other utilities. 00:04:02.676 07:39:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.676 07:39:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.676 07:39:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.676 07:39:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.676 07:39:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.615 Creating new GPT entries in memory. 00:04:03.615 The operation has completed successfully. 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3053124 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:03.615 07:39:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.615 07:39:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.153 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.413 07:39:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.413 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.413 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.413 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.672 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.672 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.672 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.672 /dev/nvme0n1: calling ioctl to re-read partition table: Success
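The wipefs pass just traced is the suite's standard teardown: unmount the scratch mountpoint if it is still mounted, erase the ext4 superblock magic (53 ef) from the partition, then erase the primary GPT header, the backup GPT header and the protective MBR from the whole disk, which is exactly what the 'bytes were erased' lines above report. The idiom, reduced to a destructive illustration with the mountpoint and device names of this run:

    #!/usr/bin/env bash
    # cleanup_nvme teardown idiom (devices.sh@20-28 above); wipes real data.
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic at 0x438
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT headers + PMBR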
00:04:06.672 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:06.672 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:06.672 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.672 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:06.672 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.673 07:39:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.211 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.212 07:39:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.470 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.471 07:39:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.761 07:39:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.761 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.761 00:04:12.761 real 0m11.015s 00:04:12.761 user 0m3.200s 00:04:12.761 sys 0m5.648s 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.761 07:39:57 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:12.761 ************************************ 00:04:12.761 END TEST nvme_mount 00:04:12.761 ************************************ 00:04:12.761 07:39:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:12.761 07:39:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:12.761 07:39:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.761 07:39:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.761 07:39:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:12.761 ************************************ 00:04:12.761 START TEST dm_mount 00:04:12.761 ************************************ 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.761 07:39:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:13.697 Creating new GPT entries in memory. 00:04:13.697 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.697 other utilities. 00:04:13.697 07:39:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.697 07:39:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.697 07:39:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:13.697 07:39:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.697 07:39:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.635 Creating new GPT entries in memory. 00:04:14.635 The operation has completed successfully. 00:04:14.635 07:39:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.635 07:39:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.635 07:39:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.635 07:39:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.635 07:39:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:15.576 The operation has completed successfully. 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3057316 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.576 07:40:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.901 07:40:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:18.901 07:40:03 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.901 07:40:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.437 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.438 07:40:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:21.438 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.438 07:40:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:21.438 07:40:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:21.438 00:04:21.438 real 0m8.923s 00:04:21.438 user 0m2.189s 00:04:21.438 sys 0m3.770s 00:04:21.438 07:40:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.438 07:40:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:21.438 ************************************ 00:04:21.438 END TEST dm_mount 00:04:21.438 ************************************ 00:04:21.438 07:40:06 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.438 07:40:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.697 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:21.697 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:21.697 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:21.697 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.697 07:40:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:21.697 00:04:21.697 real 0m23.680s 00:04:21.697 user 0m6.732s 00:04:21.697 sys 0m11.704s 00:04:21.697 07:40:06 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.697 07:40:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.697 ************************************ 00:04:21.697 END TEST devices 00:04:21.697 ************************************ 00:04:21.697 07:40:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.697 00:04:21.697 real 1m19.524s 00:04:21.697 user 0m26.096s 00:04:21.697 sys 0m44.215s 00:04:21.697 07:40:06 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.697 07:40:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.697 ************************************ 00:04:21.697 END TEST setup.sh 00:04:21.697 ************************************ 00:04:21.697 07:40:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.697 07:40:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.988 Hugepages 00:04:24.988 node hugesize free / total 00:04:24.988 node0 1048576kB 0 / 0 00:04:24.988 node0 2048kB 2048 / 2048 00:04:24.988 node1 1048576kB 0 / 0 00:04:24.988 node1 2048kB 0 / 0 00:04:24.988 00:04:24.988 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.988 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:24.988 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:24.988 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:24.988 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:24.988 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:24.988 07:40:09 -- spdk/autotest.sh@130 -- # uname -s 00:04:24.988 07:40:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:24.988 07:40:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:24.988 07:40:09 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.525 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.525 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.463 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.463 07:40:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:29.398 07:40:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:29.398 07:40:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:29.398 07:40:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.398 07:40:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:29.398 07:40:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:29.398 07:40:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:29.398 07:40:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.398 07:40:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.398 07:40:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:29.656 07:40:14 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:29.656 07:40:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:29.656 07:40:14 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.189 Waiting for block devices as requested 00:04:32.189 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.449 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.449 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.708 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.708 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:32.708 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:32.708 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:32.968 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:32.968 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:32.968 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:33.227 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:33.227 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:33.227 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:33.486 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.486 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.486 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.486 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.745 07:40:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:33.745 07:40:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:33.745 07:40:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:33.745 07:40:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:33.745 07:40:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:33.745 07:40:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:33.745 07:40:18 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:33.745 07:40:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:33.745 07:40:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:33.745 07:40:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:33.745 07:40:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:33.745 07:40:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:33.745 07:40:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:33.745 07:40:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:33.745 07:40:18 -- common/autotest_common.sh@1557 -- # continue 00:04:33.745 07:40:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.745 07:40:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.745 07:40:18 -- common/autotest_common.sh@10 -- # set +x 00:04:33.745 07:40:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.745 07:40:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.745 07:40:18 -- common/autotest_common.sh@10 -- # set +x 00:04:33.745 07:40:18 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.032 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:37.032 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:37.032 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.616 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.616 07:40:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:37.616 07:40:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.616 07:40:22 -- common/autotest_common.sh@10 -- # set +x 00:04:37.616 07:40:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:37.616 07:40:22 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:37.616 07:40:22 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:37.616 07:40:22 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:37.616 07:40:22 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:37.616 07:40:22 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:37.616 07:40:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:37.616 07:40:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:37.616 07:40:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.616 07:40:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:37.616 07:40:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:37.616 07:40:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:37.616 07:40:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:37.616 07:40:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:37.616 07:40:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:37.616 07:40:22 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:37.616 07:40:22 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:37.616 07:40:22 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:37.616 07:40:22 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:37.616 07:40:22 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:37.616 07:40:22 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3066122 00:04:37.616 07:40:22 -- common/autotest_common.sh@1598 -- # waitforlisten 3066122 00:04:37.616 07:40:22 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.616 07:40:22 -- common/autotest_common.sh@829 -- # '[' -z 3066122 ']' 00:04:37.616 07:40:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.616 07:40:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.616 07:40:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.616 07:40:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.616 07:40:22 -- common/autotest_common.sh@10 -- # set +x 00:04:37.616 [2024-07-15 07:40:22.355675] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
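The opal_revert_cleanup path traced above gets its BDF list from get_nvme_bdfs, which shells out to gen_nvme.sh for a JSON bdev config and pulls each controller's PCI address out of .params.traddr with jq. A minimal stand-alone sketch of that enumeration, assuming it is run from the SPDK repository root:

    # gen_nvme.sh emits a JSON config; every NVMe controller entry carries
    # its PCI address in .config[].params.traddr.
    bdfs=()
    while IFS= read -r traddr; do
        bdfs+=("$traddr")
    done < <(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || exit 1      # mirrors the (( 1 == 0 )) guard above
    printf '%s\n' "${bdfs[@]}"           # on this node: 0000:5e:00.0

On this host the list has exactly one entry, which is why the log prints the single BDF 0000:5e:00.0 before spdk_tgt is launched.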
00:04:37.616 [2024-07-15 07:40:22.355723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066122 ] 00:04:37.874 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.874 [2024-07-15 07:40:22.421661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.874 [2024-07-15 07:40:22.502430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.441 07:40:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.441 07:40:23 -- common/autotest_common.sh@862 -- # return 0 00:04:38.441 07:40:23 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:38.441 07:40:23 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:38.441 07:40:23 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:41.762 nvme0n1 00:04:41.762 07:40:26 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:41.762 [2024-07-15 07:40:26.316253] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:41.762 request: 00:04:41.762 { 00:04:41.762 "nvme_ctrlr_name": "nvme0", 00:04:41.762 "password": "test", 00:04:41.762 "method": "bdev_nvme_opal_revert", 00:04:41.762 "req_id": 1 00:04:41.762 } 00:04:41.762 Got JSON-RPC error response 00:04:41.762 response: 00:04:41.762 { 00:04:41.762 "code": -32602, 00:04:41.762 "message": "Invalid parameters" 00:04:41.762 } 00:04:41.762 07:40:26 -- common/autotest_common.sh@1604 -- # true 00:04:41.762 07:40:26 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:41.762 07:40:26 -- common/autotest_common.sh@1608 -- # killprocess 3066122 00:04:41.762 07:40:26 -- common/autotest_common.sh@948 -- # '[' -z 3066122 ']' 00:04:41.762 07:40:26 -- common/autotest_common.sh@952 -- # kill -0 3066122 00:04:41.762 07:40:26 -- common/autotest_common.sh@953 -- # uname 00:04:41.762 07:40:26 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.762 07:40:26 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3066122 00:04:41.762 07:40:26 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.762 07:40:26 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.762 07:40:26 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3066122' 00:04:41.762 killing process with pid 3066122 00:04:41.763 07:40:26 -- common/autotest_common.sh@967 -- # kill 3066122 00:04:41.763 07:40:26 -- common/autotest_common.sh@972 -- # wait 3066122 00:04:43.670 07:40:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:43.670 07:40:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:43.670 07:40:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.670 07:40:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.670 07:40:27 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:43.670 07:40:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.670 07:40:27 -- common/autotest_common.sh@10 -- # set +x 00:04:43.670 07:40:27 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:43.670 07:40:27 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.670 07:40:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:43.670 07:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.670 07:40:27 -- common/autotest_common.sh@10 -- # set +x 00:04:43.670 ************************************ 00:04:43.670 START TEST env 00:04:43.670 ************************************ 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.670 * Looking for test storage... 00:04:43.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:43.670 07:40:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.670 07:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.670 ************************************ 00:04:43.670 START TEST env_memory 00:04:43.670 ************************************ 00:04:43.670 07:40:28 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.670 00:04:43.670 00:04:43.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.670 http://cunit.sourceforge.net/ 00:04:43.670 00:04:43.670 00:04:43.670 Suite: memory 00:04:43.670 Test: alloc and free memory map ...[2024-07-15 07:40:28.180432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.670 passed 00:04:43.670 Test: mem map translation ...[2024-07-15 07:40:28.199616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.670 [2024-07-15 07:40:28.199633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.670 [2024-07-15 07:40:28.199671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.670 [2024-07-15 07:40:28.199678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.670 passed 00:04:43.670 Test: mem map registration ...[2024-07-15 07:40:28.238524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.670 [2024-07-15 07:40:28.238542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.670 passed 00:04:43.670 Test: mem map adjacent registrations ...passed 00:04:43.670 00:04:43.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.670 suites 1 1 n/a 0 0 00:04:43.670 tests 4 4 4 0 0 00:04:43.670 asserts 152 152 152 0 n/a 00:04:43.670 00:04:43.670 Elapsed time = 0.139 seconds 00:04:43.670 00:04:43.670 real 0m0.151s 00:04:43.670 user 0m0.141s 00:04:43.670 sys 0m0.010s 00:04:43.670 07:40:28 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.670 07:40:28 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:43.670 ************************************ 00:04:43.670 END TEST env_memory 00:04:43.670 ************************************ 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:43.670 07:40:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.670 07:40:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.670 07:40:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.670 ************************************ 00:04:43.670 START TEST env_vtophys 00:04:43.670 ************************************ 00:04:43.670 07:40:28 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.670 EAL: lib.eal log level changed from notice to debug 00:04:43.670 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.670 EAL: Detected lcore 1 as core 1 on socket 0 00:04:43.670 EAL: Detected lcore 2 as core 2 on socket 0 00:04:43.670 EAL: Detected lcore 3 as core 3 on socket 0 00:04:43.670 EAL: Detected lcore 4 as core 4 on socket 0 00:04:43.670 EAL: Detected lcore 5 as core 5 on socket 0 00:04:43.670 EAL: Detected lcore 6 as core 6 on socket 0 00:04:43.670 EAL: Detected lcore 7 as core 8 on socket 0 00:04:43.670 EAL: Detected lcore 8 as core 9 on socket 0 00:04:43.670 EAL: Detected lcore 9 as core 10 on socket 0 00:04:43.670 EAL: Detected lcore 10 as core 11 on socket 0 00:04:43.670 EAL: Detected lcore 11 as core 12 on socket 0 00:04:43.670 EAL: Detected lcore 12 as core 13 on socket 0 00:04:43.670 EAL: Detected lcore 13 as core 16 on socket 0 00:04:43.670 EAL: Detected lcore 14 as core 17 on socket 0 00:04:43.670 EAL: Detected lcore 15 as core 18 on socket 0 00:04:43.670 EAL: Detected lcore 16 as core 19 on socket 0 00:04:43.670 EAL: Detected lcore 17 as core 20 on socket 0 00:04:43.670 EAL: Detected lcore 18 as core 21 on socket 0 00:04:43.670 EAL: Detected lcore 19 as core 25 on socket 0 00:04:43.670 EAL: Detected lcore 20 as core 26 on socket 0 00:04:43.670 EAL: Detected lcore 21 as core 27 on socket 0 00:04:43.670 EAL: Detected lcore 22 as core 28 on socket 0 00:04:43.670 EAL: Detected lcore 23 as core 29 on socket 0 00:04:43.670 EAL: Detected lcore 24 as core 0 on socket 1 00:04:43.670 EAL: Detected lcore 25 as core 1 on socket 1 00:04:43.670 EAL: Detected lcore 26 as core 2 on socket 1 00:04:43.670 EAL: Detected lcore 27 as core 3 on socket 1 00:04:43.670 EAL: Detected lcore 28 as core 4 on socket 1 00:04:43.670 EAL: Detected lcore 29 as core 5 on socket 1 00:04:43.670 EAL: Detected lcore 30 as core 6 on socket 1 00:04:43.670 EAL: Detected lcore 31 as core 9 on socket 1 00:04:43.670 EAL: Detected lcore 32 as core 10 on socket 1 00:04:43.670 EAL: Detected lcore 33 as core 11 on socket 1 00:04:43.670 EAL: Detected lcore 34 as core 12 on socket 1 00:04:43.670 EAL: Detected lcore 35 as core 13 on socket 1 00:04:43.670 EAL: Detected lcore 36 as core 16 on socket 1 00:04:43.670 EAL: Detected lcore 37 as core 17 on socket 1 00:04:43.670 EAL: Detected lcore 38 as core 18 on socket 1 00:04:43.670 EAL: Detected lcore 39 as core 19 on socket 1 00:04:43.670 EAL: Detected lcore 40 as core 20 on socket 1 00:04:43.670 EAL: Detected lcore 41 as core 21 on socket 1 00:04:43.670 EAL: Detected lcore 42 as core 24 on socket 1 00:04:43.670 EAL: Detected lcore 43 as core 25 on socket 1 00:04:43.670 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:43.670 EAL: Detected lcore 45 as core 27 on socket 1 00:04:43.670 EAL: Detected lcore 46 as core 28 on socket 1 00:04:43.670 EAL: Detected lcore 47 as core 29 on socket 1 00:04:43.670 EAL: Detected lcore 48 as core 0 on socket 0 00:04:43.670 EAL: Detected lcore 49 as core 1 on socket 0 00:04:43.670 EAL: Detected lcore 50 as core 2 on socket 0 00:04:43.670 EAL: Detected lcore 51 as core 3 on socket 0 00:04:43.670 EAL: Detected lcore 52 as core 4 on socket 0 00:04:43.670 EAL: Detected lcore 53 as core 5 on socket 0 00:04:43.670 EAL: Detected lcore 54 as core 6 on socket 0 00:04:43.670 EAL: Detected lcore 55 as core 8 on socket 0 00:04:43.670 EAL: Detected lcore 56 as core 9 on socket 0 00:04:43.670 EAL: Detected lcore 57 as core 10 on socket 0 00:04:43.670 EAL: Detected lcore 58 as core 11 on socket 0 00:04:43.670 EAL: Detected lcore 59 as core 12 on socket 0 00:04:43.670 EAL: Detected lcore 60 as core 13 on socket 0 00:04:43.670 EAL: Detected lcore 61 as core 16 on socket 0 00:04:43.670 EAL: Detected lcore 62 as core 17 on socket 0 00:04:43.670 EAL: Detected lcore 63 as core 18 on socket 0 00:04:43.671 EAL: Detected lcore 64 as core 19 on socket 0 00:04:43.671 EAL: Detected lcore 65 as core 20 on socket 0 00:04:43.671 EAL: Detected lcore 66 as core 21 on socket 0 00:04:43.671 EAL: Detected lcore 67 as core 25 on socket 0 00:04:43.671 EAL: Detected lcore 68 as core 26 on socket 0 00:04:43.671 EAL: Detected lcore 69 as core 27 on socket 0 00:04:43.671 EAL: Detected lcore 70 as core 28 on socket 0 00:04:43.671 EAL: Detected lcore 71 as core 29 on socket 0 00:04:43.671 EAL: Detected lcore 72 as core 0 on socket 1 00:04:43.671 EAL: Detected lcore 73 as core 1 on socket 1 00:04:43.671 EAL: Detected lcore 74 as core 2 on socket 1 00:04:43.671 EAL: Detected lcore 75 as core 3 on socket 1 00:04:43.671 EAL: Detected lcore 76 as core 4 on socket 1 00:04:43.671 EAL: Detected lcore 77 as core 5 on socket 1 00:04:43.671 EAL: Detected lcore 78 as core 6 on socket 1 00:04:43.671 EAL: Detected lcore 79 as core 9 on socket 1 00:04:43.671 EAL: Detected lcore 80 as core 10 on socket 1 00:04:43.671 EAL: Detected lcore 81 as core 11 on socket 1 00:04:43.671 EAL: Detected lcore 82 as core 12 on socket 1 00:04:43.671 EAL: Detected lcore 83 as core 13 on socket 1 00:04:43.671 EAL: Detected lcore 84 as core 16 on socket 1 00:04:43.671 EAL: Detected lcore 85 as core 17 on socket 1 00:04:43.671 EAL: Detected lcore 86 as core 18 on socket 1 00:04:43.671 EAL: Detected lcore 87 as core 19 on socket 1 00:04:43.671 EAL: Detected lcore 88 as core 20 on socket 1 00:04:43.671 EAL: Detected lcore 89 as core 21 on socket 1 00:04:43.671 EAL: Detected lcore 90 as core 24 on socket 1 00:04:43.671 EAL: Detected lcore 91 as core 25 on socket 1 00:04:43.671 EAL: Detected lcore 92 as core 26 on socket 1 00:04:43.671 EAL: Detected lcore 93 as core 27 on socket 1 00:04:43.671 EAL: Detected lcore 94 as core 28 on socket 1 00:04:43.671 EAL: Detected lcore 95 as core 29 on socket 1 00:04:43.671 EAL: Maximum logical cores by configuration: 128 00:04:43.671 EAL: Detected CPU lcores: 96 00:04:43.671 EAL: Detected NUMA nodes: 2 00:04:43.671 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:43.671 EAL: Detected shared linkage of DPDK 00:04:43.671 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.671 EAL: Bus pci wants IOVA as 'DC' 00:04:43.671 EAL: Buses did not request a specific IOVA mode. 00:04:43.671 EAL: IOMMU is available, selecting IOVA as VA mode. 
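EAL settles on IOVA-as-VA here because the host exposes an IOMMU and type 1 VFIO initializes; without that it would have to fall back to IOVA-as-PA with a uio driver. Whether a box can take the VFIO path can be checked from sysfs before the test ever starts; a hedged sketch using only standard Linux paths, not SPDK helpers:

    # A populated /sys/kernel/iommu_groups means the kernel booted with a
    # working IOMMU, which is the precondition for "VFIO support initialized".
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups present: VFIO / IOVA=VA available"
    else
        echo "no IOMMU groups: expect uio_pci_generic / IOVA=PA fallback"
    fi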
00:04:43.671 EAL: Selected IOVA mode 'VA' 00:04:43.671 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.671 EAL: Probing VFIO support... 00:04:43.671 EAL: IOMMU type 1 (Type 1) is supported 00:04:43.671 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:43.671 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:43.671 EAL: VFIO support initialized 00:04:43.671 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.671 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.671 EAL: Setting up physically contiguous memory... 00:04:43.671 EAL: Setting maximum number of open files to 524288 00:04:43.671 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.671 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:43.671 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.671 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:43.671 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:43.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.671 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:43.671 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.671 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:43.671 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:43.671 EAL: Hugepages will be freed exactly as allocated. 00:04:43.671 EAL: No shared files mode enabled, IPC is disabled 00:04:43.671 EAL: No shared files mode enabled, IPC is disabled 00:04:43.671 EAL: TSC frequency is ~2300000 KHz 00:04:43.671 EAL: Main lcore 0 is ready (tid=7fc105041a00;cpuset=[0]) 00:04:43.671 EAL: Trying to obtain current memory policy. 00:04:43.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.671 EAL: Restoring previous memory policy: 0 00:04:43.671 EAL: request: mp_malloc_sync 00:04:43.671 EAL: No shared files mode enabled, IPC is disabled 00:04:43.671 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.671 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.931 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.931 00:04:43.931 00:04:43.931 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.931 http://cunit.sourceforge.net/ 00:04:43.931 00:04:43.931 00:04:43.931 Suite: components_suite 00:04:43.931 Test: vtophys_malloc_test ...passed 00:04:43.931 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.931 EAL: Restoring previous memory policy: 4 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.931 EAL: Trying to obtain current memory policy. 00:04:43.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.931 EAL: Restoring previous memory policy: 4 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.931 EAL: Trying to obtain current memory policy. 
00:04:43.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.931 EAL: Restoring previous memory policy: 4 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.931 EAL: Trying to obtain current memory policy. 00:04:43.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.931 EAL: Restoring previous memory policy: 4 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.931 EAL: request: mp_malloc_sync 00:04:43.931 EAL: No shared files mode enabled, IPC is disabled 00:04:43.931 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.931 EAL: Trying to obtain current memory policy. 00:04:43.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.931 EAL: Restoring previous memory policy: 4 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.932 EAL: Trying to obtain current memory policy. 00:04:43.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.932 EAL: Restoring previous memory policy: 4 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.932 EAL: Trying to obtain current memory policy. 00:04:43.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.932 EAL: Restoring previous memory policy: 4 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.932 EAL: Trying to obtain current memory policy. 
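Each expanded/shrunk pair in vtophys_spdk_malloc_test is one loop iteration: allocate a buffer, let the registered 'spdk:' mem event callback grow the heap, free it, and watch the heap shrink by the same amount. The sizes are not arbitrary; reading them off the log (4, 6, 10, 18, 34, 66, 130 MB so far, continuing below to 1026 MB) they follow 2^n + 2 MB. A small sketch reproducing that progression, assuming the rule inferred from the log rather than stated by the test:

    # Heap deltas observed in this suite: 2^n + 2 MB for n = 1..10.
    for n in $(seq 1 10); do
        printf '%dMB\n' $(( (1 << n) + 2 ))
    done
    # prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB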
00:04:43.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.932 EAL: Restoring previous memory policy: 4 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.932 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.932 EAL: request: mp_malloc_sync 00:04:43.932 EAL: No shared files mode enabled, IPC is disabled 00:04:43.932 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.932 EAL: Trying to obtain current memory policy. 00:04:43.932 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.191 EAL: Restoring previous memory policy: 4 00:04:44.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.191 EAL: request: mp_malloc_sync 00:04:44.191 EAL: No shared files mode enabled, IPC is disabled 00:04:44.191 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.191 EAL: request: mp_malloc_sync 00:04:44.191 EAL: No shared files mode enabled, IPC is disabled 00:04:44.191 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.191 EAL: Trying to obtain current memory policy. 00:04:44.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.450 EAL: Restoring previous memory policy: 4 00:04:44.450 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.450 EAL: request: mp_malloc_sync 00:04:44.450 EAL: No shared files mode enabled, IPC is disabled 00:04:44.450 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.709 EAL: request: mp_malloc_sync 00:04:44.709 EAL: No shared files mode enabled, IPC is disabled 00:04:44.709 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.709 passed 00:04:44.709 00:04:44.709 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.709 suites 1 1 n/a 0 0 00:04:44.709 tests 2 2 2 0 0 00:04:44.709 asserts 497 497 497 0 n/a 00:04:44.709 00:04:44.709 Elapsed time = 0.967 seconds 00:04:44.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.709 EAL: request: mp_malloc_sync 00:04:44.709 EAL: No shared files mode enabled, IPC is disabled 00:04:44.709 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.709 EAL: No shared files mode enabled, IPC is disabled 00:04:44.709 EAL: No shared files mode enabled, IPC is disabled 00:04:44.709 EAL: No shared files mode enabled, IPC is disabled 00:04:44.709 00:04:44.709 real 0m1.087s 00:04:44.709 user 0m0.636s 00:04:44.709 sys 0m0.427s 00:04:44.709 07:40:29 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.709 07:40:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.709 ************************************ 00:04:44.709 END TEST env_vtophys 00:04:44.709 ************************************ 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.968 07:40:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.968 07:40:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.968 ************************************ 00:04:44.968 START TEST env_pci 00:04:44.968 ************************************ 00:04:44.968 07:40:29 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.968 00:04:44.968 00:04:44.968 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.968 http://cunit.sourceforge.net/ 00:04:44.968 00:04:44.968 00:04:44.968 Suite: pci 00:04:44.968 Test: pci_hook ...[2024-07-15 07:40:29.521709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3067433 has claimed it 00:04:44.968 EAL: Cannot find device (10000:00:01.0) 00:04:44.968 EAL: Failed to attach device on primary process 00:04:44.968 passed 00:04:44.968 00:04:44.968 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.968 suites 1 1 n/a 0 0 00:04:44.968 tests 1 1 1 0 0 00:04:44.968 asserts 25 25 25 0 n/a 00:04:44.968 00:04:44.968 Elapsed time = 0.023 seconds 00:04:44.968 00:04:44.968 real 0m0.038s 00:04:44.968 user 0m0.013s 00:04:44.968 sys 0m0.025s 00:04:44.968 07:40:29 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.968 07:40:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.968 ************************************ 00:04:44.968 END TEST env_pci 00:04:44.968 ************************************ 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.968 07:40:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.968 07:40:29 env -- env/env.sh@15 -- # uname 00:04:44.968 07:40:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.968 07:40:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.968 07:40:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:44.968 07:40:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.968 07:40:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.968 ************************************ 00:04:44.968 START TEST env_dpdk_post_init 00:04:44.968 ************************************ 00:04:44.968 07:40:29 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.968 EAL: Detected CPU lcores: 96 00:04:44.968 EAL: Detected NUMA nodes: 2 00:04:44.968 EAL: Detected shared linkage of DPDK 00:04:44.968 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.968 EAL: Selected IOVA mode 'VA' 00:04:44.968 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.968 EAL: VFIO support initialized 00:04:44.968 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.227 EAL: Using IOMMU type 1 (Type 1) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:45.227 EAL: Ignore mapping IO port bar(1) 00:04:45.227 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:46.163 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:46.163 EAL: Ignore mapping IO port bar(1) 00:04:46.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:49.449 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:49.449 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:49.449 Starting DPDK initialization... 00:04:49.449 Starting SPDK post initialization... 00:04:49.449 SPDK NVMe probe 00:04:49.449 Attaching to 0000:5e:00.0 00:04:49.449 Attached to 0000:5e:00.0 00:04:49.449 Cleaning up... 
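The Attaching/Attached pair below succeeds because 0000:5e:00.0 was put on vfio-pci by the earlier setup.sh run, while PCI_ALLOWED kept the harness from touching anything else. Outside the harness the same binding can be inspected and reproduced directly; a hedged sketch from the SPDK repository root:

    bdf=0000:5e:00.0
    # The driver symlink names whichever kernel driver currently owns the
    # device (vfio-pci while SPDK holds it, nvme after setup.sh reset).
    readlink -f "/sys/bus/pci/devices/${bdf}/driver"
    # setup.sh honours PCI_ALLOWED, so only the listed device is considered,
    # just as the verify helper did above:
    sudo PCI_ALLOWED="$bdf" ./scripts/setup.sh config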
00:04:49.449 00:04:49.449 real 0m4.325s 00:04:49.449 user 0m3.268s 00:04:49.449 sys 0m0.132s 00:04:49.449 07:40:33 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.449 07:40:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.449 ************************************ 00:04:49.449 END TEST env_dpdk_post_init 00:04:49.449 ************************************ 00:04:49.449 07:40:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.449 07:40:33 env -- env/env.sh@26 -- # uname 00:04:49.449 07:40:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.449 07:40:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.449 07:40:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.449 07:40:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.449 07:40:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.449 ************************************ 00:04:49.449 START TEST env_mem_callbacks 00:04:49.449 ************************************ 00:04:49.449 07:40:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.449 EAL: Detected CPU lcores: 96 00:04:49.449 EAL: Detected NUMA nodes: 2 00:04:49.449 EAL: Detected shared linkage of DPDK 00:04:49.449 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.449 EAL: Selected IOVA mode 'VA' 00:04:49.449 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.449 EAL: VFIO support initialized 00:04:49.449 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.449 00:04:49.449 00:04:49.449 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.449 http://cunit.sourceforge.net/ 00:04:49.449 00:04:49.449 00:04:49.449 Suite: memory 00:04:49.449 Test: test ... 
00:04:49.449 register 0x200000200000 2097152 00:04:49.449 malloc 3145728 00:04:49.449 register 0x200000400000 4194304 00:04:49.449 buf 0x200000500000 len 3145728 PASSED 00:04:49.449 malloc 64 00:04:49.449 buf 0x2000004fff40 len 64 PASSED 00:04:49.449 malloc 4194304 00:04:49.449 register 0x200000800000 6291456 00:04:49.449 buf 0x200000a00000 len 4194304 PASSED 00:04:49.449 free 0x200000500000 3145728 00:04:49.449 free 0x2000004fff40 64 00:04:49.449 unregister 0x200000400000 4194304 PASSED 00:04:49.449 free 0x200000a00000 4194304 00:04:49.449 unregister 0x200000800000 6291456 PASSED 00:04:49.449 malloc 8388608 00:04:49.449 register 0x200000400000 10485760 00:04:49.449 buf 0x200000600000 len 8388608 PASSED 00:04:49.449 free 0x200000600000 8388608 00:04:49.449 unregister 0x200000400000 10485760 PASSED 00:04:49.449 passed 00:04:49.449 00:04:49.449 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.449 suites 1 1 n/a 0 0 00:04:49.449 tests 1 1 1 0 0 00:04:49.449 asserts 15 15 15 0 n/a 00:04:49.449 00:04:49.449 Elapsed time = 0.008 seconds 00:04:49.449 00:04:49.449 real 0m0.058s 00:04:49.449 user 0m0.020s 00:04:49.449 sys 0m0.038s 00:04:49.449 07:40:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.449 07:40:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.449 ************************************ 00:04:49.449 END TEST env_mem_callbacks 00:04:49.449 ************************************ 00:04:49.449 07:40:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.449 00:04:49.449 real 0m6.089s 00:04:49.449 user 0m4.248s 00:04:49.449 sys 0m0.922s 00:04:49.449 07:40:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.449 07:40:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.449 ************************************ 00:04:49.449 END TEST env 00:04:49.449 ************************************ 00:04:49.449 07:40:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.449 07:40:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.449 07:40:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.449 07:40:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.449 07:40:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.449 ************************************ 00:04:49.449 START TEST rpc 00:04:49.449 ************************************ 00:04:49.449 07:40:34 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.709 * Looking for test storage... 00:04:49.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.709 07:40:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3068248 00:04:49.709 07:40:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.709 07:40:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.709 07:40:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3068248 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@829 -- # '[' -z 3068248 ']' 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:49.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.709 07:40:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.709 [2024-07-15 07:40:34.317318] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:04:49.709 [2024-07-15 07:40:34.317366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068248 ] 00:04:49.709 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.709 [2024-07-15 07:40:34.383957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.968 [2024-07-15 07:40:34.463134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.968 [2024-07-15 07:40:34.463169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3068248' to capture a snapshot of events at runtime. 00:04:49.968 [2024-07-15 07:40:34.463176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.968 [2024-07-15 07:40:34.463182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.968 [2024-07-15 07:40:34.463186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3068248 for offline analysis/debug. 00:04:49.968 [2024-07-15 07:40:34.463204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.536 07:40:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.537 07:40:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:50.537 07:40:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.537 07:40:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.537 07:40:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.537 07:40:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.537 07:40:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.537 07:40:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.537 07:40:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.537 ************************************ 00:04:50.537 START TEST rpc_integrity 00:04:50.537 ************************************ 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.537 { 00:04:50.537 "name": "Malloc0", 00:04:50.537 "aliases": [ 00:04:50.537 "7bd8b7a4-c69f-4d78-ba9c-754c835f5af2" 00:04:50.537 ], 00:04:50.537 "product_name": "Malloc disk", 00:04:50.537 "block_size": 512, 00:04:50.537 "num_blocks": 16384, 00:04:50.537 "uuid": "7bd8b7a4-c69f-4d78-ba9c-754c835f5af2", 00:04:50.537 "assigned_rate_limits": { 00:04:50.537 "rw_ios_per_sec": 0, 00:04:50.537 "rw_mbytes_per_sec": 0, 00:04:50.537 "r_mbytes_per_sec": 0, 00:04:50.537 "w_mbytes_per_sec": 0 00:04:50.537 }, 00:04:50.537 "claimed": false, 00:04:50.537 "zoned": false, 00:04:50.537 "supported_io_types": { 00:04:50.537 "read": true, 00:04:50.537 "write": true, 00:04:50.537 "unmap": true, 00:04:50.537 "flush": true, 00:04:50.537 "reset": true, 00:04:50.537 "nvme_admin": false, 00:04:50.537 "nvme_io": false, 00:04:50.537 "nvme_io_md": false, 00:04:50.537 "write_zeroes": true, 00:04:50.537 "zcopy": true, 00:04:50.537 "get_zone_info": false, 00:04:50.537 "zone_management": false, 00:04:50.537 "zone_append": false, 00:04:50.537 "compare": false, 00:04:50.537 "compare_and_write": false, 00:04:50.537 "abort": true, 00:04:50.537 "seek_hole": false, 00:04:50.537 "seek_data": false, 00:04:50.537 "copy": true, 00:04:50.537 "nvme_iov_md": false 00:04:50.537 }, 00:04:50.537 "memory_domains": [ 00:04:50.537 { 00:04:50.537 "dma_device_id": "system", 00:04:50.537 "dma_device_type": 1 00:04:50.537 }, 00:04:50.537 { 00:04:50.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.537 "dma_device_type": 2 00:04:50.537 } 00:04:50.537 ], 00:04:50.537 "driver_specific": {} 00:04:50.537 } 00:04:50.537 ]' 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.537 [2024-07-15 07:40:35.264215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.537 [2024-07-15 07:40:35.264247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.537 [2024-07-15 07:40:35.264261] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20f52d0 00:04:50.537 [2024-07-15 07:40:35.264266] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.537 
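Note: the 'Match on Malloc0 ... bdev claimed' notices above record the passthru vbdev claiming its base bdev; once Passthru0 exists, Malloc0 reports "claimed": true with claim_type "exclusive_write" in the dump that follows. The same topology can be rebuilt against a running target; a sketch using the RPCs visible in this trace (the test omits -b and gets the name Malloc0 by default):

    # sketch: rebuild the test's Malloc0 -> Passthru0 stack via scripts/rpc.py
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 8 512           # 8 MB bdev, 512 B blocks -> 16384 blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 exclusively
    ./scripts/rpc.py bdev_get_bdevs | jq '.[] | {name, claimed}'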
[2024-07-15 07:40:35.265334] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.537 [2024-07-15 07:40:35.265353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.537 Passthru0 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.537 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.537 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.797 { 00:04:50.797 "name": "Malloc0", 00:04:50.797 "aliases": [ 00:04:50.797 "7bd8b7a4-c69f-4d78-ba9c-754c835f5af2" 00:04:50.797 ], 00:04:50.797 "product_name": "Malloc disk", 00:04:50.797 "block_size": 512, 00:04:50.797 "num_blocks": 16384, 00:04:50.797 "uuid": "7bd8b7a4-c69f-4d78-ba9c-754c835f5af2", 00:04:50.797 "assigned_rate_limits": { 00:04:50.797 "rw_ios_per_sec": 0, 00:04:50.797 "rw_mbytes_per_sec": 0, 00:04:50.797 "r_mbytes_per_sec": 0, 00:04:50.797 "w_mbytes_per_sec": 0 00:04:50.797 }, 00:04:50.797 "claimed": true, 00:04:50.797 "claim_type": "exclusive_write", 00:04:50.797 "zoned": false, 00:04:50.797 "supported_io_types": { 00:04:50.797 "read": true, 00:04:50.797 "write": true, 00:04:50.797 "unmap": true, 00:04:50.797 "flush": true, 00:04:50.797 "reset": true, 00:04:50.797 "nvme_admin": false, 00:04:50.797 "nvme_io": false, 00:04:50.797 "nvme_io_md": false, 00:04:50.797 "write_zeroes": true, 00:04:50.797 "zcopy": true, 00:04:50.797 "get_zone_info": false, 00:04:50.797 "zone_management": false, 00:04:50.797 "zone_append": false, 00:04:50.797 "compare": false, 00:04:50.797 "compare_and_write": false, 00:04:50.797 "abort": true, 00:04:50.797 "seek_hole": false, 00:04:50.797 "seek_data": false, 00:04:50.797 "copy": true, 00:04:50.797 "nvme_iov_md": false 00:04:50.797 }, 00:04:50.797 "memory_domains": [ 00:04:50.797 { 00:04:50.797 "dma_device_id": "system", 00:04:50.797 "dma_device_type": 1 00:04:50.797 }, 00:04:50.797 { 00:04:50.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.797 "dma_device_type": 2 00:04:50.797 } 00:04:50.797 ], 00:04:50.797 "driver_specific": {} 00:04:50.797 }, 00:04:50.797 { 00:04:50.797 "name": "Passthru0", 00:04:50.797 "aliases": [ 00:04:50.797 "44b42947-ddd7-5e9b-b003-cd5d46b29428" 00:04:50.797 ], 00:04:50.797 "product_name": "passthru", 00:04:50.797 "block_size": 512, 00:04:50.797 "num_blocks": 16384, 00:04:50.797 "uuid": "44b42947-ddd7-5e9b-b003-cd5d46b29428", 00:04:50.797 "assigned_rate_limits": { 00:04:50.797 "rw_ios_per_sec": 0, 00:04:50.797 "rw_mbytes_per_sec": 0, 00:04:50.797 "r_mbytes_per_sec": 0, 00:04:50.797 "w_mbytes_per_sec": 0 00:04:50.797 }, 00:04:50.797 "claimed": false, 00:04:50.797 "zoned": false, 00:04:50.797 "supported_io_types": { 00:04:50.797 "read": true, 00:04:50.797 "write": true, 00:04:50.797 "unmap": true, 00:04:50.797 "flush": true, 00:04:50.797 "reset": true, 00:04:50.797 "nvme_admin": false, 00:04:50.797 "nvme_io": false, 00:04:50.797 "nvme_io_md": false, 00:04:50.797 "write_zeroes": true, 00:04:50.797 "zcopy": true, 00:04:50.797 "get_zone_info": false, 00:04:50.797 "zone_management": false, 00:04:50.797 "zone_append": false, 00:04:50.797 "compare": false, 00:04:50.797 "compare_and_write": false, 00:04:50.797 "abort": true, 00:04:50.797 "seek_hole": false, 
00:04:50.797 "seek_data": false, 00:04:50.797 "copy": true, 00:04:50.797 "nvme_iov_md": false 00:04:50.797 }, 00:04:50.797 "memory_domains": [ 00:04:50.797 { 00:04:50.797 "dma_device_id": "system", 00:04:50.797 "dma_device_type": 1 00:04:50.797 }, 00:04:50.797 { 00:04:50.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.797 "dma_device_type": 2 00:04:50.797 } 00:04:50.797 ], 00:04:50.797 "driver_specific": { 00:04:50.797 "passthru": { 00:04:50.797 "name": "Passthru0", 00:04:50.797 "base_bdev_name": "Malloc0" 00:04:50.797 } 00:04:50.797 } 00:04:50.797 } 00:04:50.797 ]' 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.797 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.797 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.798 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.798 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.798 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.798 07:40:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.798 00:04:50.798 real 0m0.271s 00:04:50.798 user 0m0.170s 00:04:50.798 sys 0m0.034s 00:04:50.798 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.798 07:40:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 ************************************ 00:04:50.798 END TEST rpc_integrity 00:04:50.798 ************************************ 00:04:50.798 07:40:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.798 07:40:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.798 07:40:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.798 07:40:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.798 07:40:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 ************************************ 00:04:50.798 START TEST rpc_plugins 00:04:50.798 ************************************ 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:50.798 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.798 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.798 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.798 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:50.798 { 00:04:50.798 "name": "Malloc1", 00:04:50.798 "aliases": [ 00:04:50.798 "cef58a29-9c2f-4de9-ada8-28bcb6f9b61a" 00:04:50.798 ], 00:04:50.798 "product_name": "Malloc disk", 00:04:50.798 "block_size": 4096, 00:04:50.798 "num_blocks": 256, 00:04:50.798 "uuid": "cef58a29-9c2f-4de9-ada8-28bcb6f9b61a", 00:04:50.798 "assigned_rate_limits": { 00:04:50.798 "rw_ios_per_sec": 0, 00:04:50.798 "rw_mbytes_per_sec": 0, 00:04:50.798 "r_mbytes_per_sec": 0, 00:04:50.798 "w_mbytes_per_sec": 0 00:04:50.798 }, 00:04:50.798 "claimed": false, 00:04:50.798 "zoned": false, 00:04:50.798 "supported_io_types": { 00:04:50.798 "read": true, 00:04:50.798 "write": true, 00:04:50.798 "unmap": true, 00:04:50.798 "flush": true, 00:04:50.798 "reset": true, 00:04:50.798 "nvme_admin": false, 00:04:50.798 "nvme_io": false, 00:04:50.798 "nvme_io_md": false, 00:04:50.798 "write_zeroes": true, 00:04:50.798 "zcopy": true, 00:04:50.798 "get_zone_info": false, 00:04:50.798 "zone_management": false, 00:04:50.798 "zone_append": false, 00:04:50.798 "compare": false, 00:04:50.798 "compare_and_write": false, 00:04:50.798 "abort": true, 00:04:50.798 "seek_hole": false, 00:04:50.798 "seek_data": false, 00:04:50.798 "copy": true, 00:04:50.798 "nvme_iov_md": false 00:04:50.798 }, 00:04:50.798 "memory_domains": [ 00:04:50.798 { 00:04:50.798 "dma_device_id": "system", 00:04:50.798 "dma_device_type": 1 00:04:50.798 }, 00:04:50.798 { 00:04:50.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.798 "dma_device_type": 2 00:04:50.798 } 00:04:50.798 ], 00:04:50.798 "driver_specific": {} 00:04:50.798 } 00:04:50.798 ]' 00:04:50.798 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.058 07:40:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.058 00:04:51.058 real 0m0.142s 00:04:51.058 user 0m0.087s 00:04:51.058 sys 0m0.019s 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.058 07:40:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 ************************************ 00:04:51.058 END TEST rpc_plugins 00:04:51.058 ************************************ 00:04:51.058 07:40:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.058 07:40:35 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.058 07:40:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.058 07:40:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.058 07:40:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 ************************************ 00:04:51.058 START TEST rpc_trace_cmd_test 00:04:51.058 ************************************ 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.058 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.058 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3068248", 00:04:51.058 "tpoint_group_mask": "0x8", 00:04:51.058 "iscsi_conn": { 00:04:51.058 "mask": "0x2", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "scsi": { 00:04:51.058 "mask": "0x4", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "bdev": { 00:04:51.058 "mask": "0x8", 00:04:51.058 "tpoint_mask": "0xffffffffffffffff" 00:04:51.058 }, 00:04:51.058 "nvmf_rdma": { 00:04:51.058 "mask": "0x10", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "nvmf_tcp": { 00:04:51.058 "mask": "0x20", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "ftl": { 00:04:51.058 "mask": "0x40", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "blobfs": { 00:04:51.058 "mask": "0x80", 00:04:51.058 "tpoint_mask": "0x0" 00:04:51.058 }, 00:04:51.058 "dsa": { 00:04:51.058 "mask": "0x200", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "thread": { 00:04:51.059 "mask": "0x400", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "nvme_pcie": { 00:04:51.059 "mask": "0x800", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "iaa": { 00:04:51.059 "mask": "0x1000", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "nvme_tcp": { 00:04:51.059 "mask": "0x2000", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "bdev_nvme": { 00:04:51.059 "mask": "0x4000", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 }, 00:04:51.059 "sock": { 00:04:51.059 "mask": "0x8000", 00:04:51.059 "tpoint_mask": "0x0" 00:04:51.059 } 00:04:51.059 }' 00:04:51.059 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.059 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:51.059 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.059 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.059 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
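Note: the tpoint_group_mask of "0x8" in the dump above is the bdev trace group, switched on by the '-e bdev' flag this spdk_tgt was started with; the assertions check that every tracepoint in that group is enabled (0xffffffffffffffff) while untraced groups stay 0x0. Spot-checking the same fields by hand (sketch, same rpc.py assumption as above):

    ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # expect "0x8" for '-e bdev'
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # expect "0xffffffffffffffff"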
00:04:51.318 00:04:51.318 real 0m0.205s 00:04:51.318 user 0m0.173s 00:04:51.318 sys 0m0.021s 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.318 07:40:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 ************************************ 00:04:51.318 END TEST rpc_trace_cmd_test 00:04:51.318 ************************************ 00:04:51.318 07:40:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.318 07:40:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.318 07:40:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.318 07:40:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.318 07:40:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.318 07:40:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.318 07:40:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 ************************************ 00:04:51.318 START TEST rpc_daemon_integrity 00:04:51.318 ************************************ 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.318 07:40:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.318 { 00:04:51.318 "name": "Malloc2", 00:04:51.318 "aliases": [ 00:04:51.318 "d76ac085-8beb-483c-8f4c-5a893c8ec93d" 00:04:51.318 ], 00:04:51.318 "product_name": "Malloc disk", 00:04:51.318 "block_size": 512, 00:04:51.318 "num_blocks": 16384, 00:04:51.318 "uuid": "d76ac085-8beb-483c-8f4c-5a893c8ec93d", 00:04:51.318 "assigned_rate_limits": { 00:04:51.318 "rw_ios_per_sec": 0, 00:04:51.318 "rw_mbytes_per_sec": 0, 00:04:51.318 "r_mbytes_per_sec": 0, 00:04:51.318 "w_mbytes_per_sec": 0 00:04:51.318 }, 00:04:51.318 "claimed": false, 00:04:51.318 "zoned": false, 00:04:51.318 "supported_io_types": { 00:04:51.318 "read": true, 00:04:51.318 "write": true, 00:04:51.318 "unmap": true, 00:04:51.318 "flush": true, 00:04:51.318 "reset": true, 00:04:51.318 "nvme_admin": false, 00:04:51.318 "nvme_io": false, 
00:04:51.318 "nvme_io_md": false, 00:04:51.318 "write_zeroes": true, 00:04:51.318 "zcopy": true, 00:04:51.318 "get_zone_info": false, 00:04:51.318 "zone_management": false, 00:04:51.318 "zone_append": false, 00:04:51.318 "compare": false, 00:04:51.318 "compare_and_write": false, 00:04:51.318 "abort": true, 00:04:51.318 "seek_hole": false, 00:04:51.318 "seek_data": false, 00:04:51.318 "copy": true, 00:04:51.318 "nvme_iov_md": false 00:04:51.318 }, 00:04:51.318 "memory_domains": [ 00:04:51.318 { 00:04:51.318 "dma_device_id": "system", 00:04:51.318 "dma_device_type": 1 00:04:51.318 }, 00:04:51.318 { 00:04:51.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.318 "dma_device_type": 2 00:04:51.318 } 00:04:51.318 ], 00:04:51.318 "driver_specific": {} 00:04:51.318 } 00:04:51.318 ]' 00:04:51.318 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.578 [2024-07-15 07:40:36.078425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.578 [2024-07-15 07:40:36.078451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.578 [2024-07-15 07:40:36.078462] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x228cac0 00:04:51.578 [2024-07-15 07:40:36.078468] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.578 [2024-07-15 07:40:36.079415] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.578 [2024-07-15 07:40:36.079435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.578 Passthru0 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.578 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.578 { 00:04:51.578 "name": "Malloc2", 00:04:51.578 "aliases": [ 00:04:51.578 "d76ac085-8beb-483c-8f4c-5a893c8ec93d" 00:04:51.578 ], 00:04:51.578 "product_name": "Malloc disk", 00:04:51.578 "block_size": 512, 00:04:51.578 "num_blocks": 16384, 00:04:51.578 "uuid": "d76ac085-8beb-483c-8f4c-5a893c8ec93d", 00:04:51.578 "assigned_rate_limits": { 00:04:51.578 "rw_ios_per_sec": 0, 00:04:51.578 "rw_mbytes_per_sec": 0, 00:04:51.578 "r_mbytes_per_sec": 0, 00:04:51.578 "w_mbytes_per_sec": 0 00:04:51.578 }, 00:04:51.578 "claimed": true, 00:04:51.578 "claim_type": "exclusive_write", 00:04:51.578 "zoned": false, 00:04:51.578 "supported_io_types": { 00:04:51.578 "read": true, 00:04:51.578 "write": true, 00:04:51.578 "unmap": true, 00:04:51.578 "flush": true, 00:04:51.578 "reset": true, 00:04:51.578 "nvme_admin": false, 00:04:51.578 "nvme_io": false, 00:04:51.578 "nvme_io_md": false, 00:04:51.578 "write_zeroes": true, 00:04:51.578 "zcopy": true, 00:04:51.578 "get_zone_info": 
false, 00:04:51.578 "zone_management": false, 00:04:51.578 "zone_append": false, 00:04:51.578 "compare": false, 00:04:51.578 "compare_and_write": false, 00:04:51.578 "abort": true, 00:04:51.578 "seek_hole": false, 00:04:51.578 "seek_data": false, 00:04:51.579 "copy": true, 00:04:51.579 "nvme_iov_md": false 00:04:51.579 }, 00:04:51.579 "memory_domains": [ 00:04:51.579 { 00:04:51.579 "dma_device_id": "system", 00:04:51.579 "dma_device_type": 1 00:04:51.579 }, 00:04:51.579 { 00:04:51.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.579 "dma_device_type": 2 00:04:51.579 } 00:04:51.579 ], 00:04:51.579 "driver_specific": {} 00:04:51.579 }, 00:04:51.579 { 00:04:51.579 "name": "Passthru0", 00:04:51.579 "aliases": [ 00:04:51.579 "92adcb46-fa4e-51c8-993c-cfbf92db384b" 00:04:51.579 ], 00:04:51.579 "product_name": "passthru", 00:04:51.579 "block_size": 512, 00:04:51.579 "num_blocks": 16384, 00:04:51.579 "uuid": "92adcb46-fa4e-51c8-993c-cfbf92db384b", 00:04:51.579 "assigned_rate_limits": { 00:04:51.579 "rw_ios_per_sec": 0, 00:04:51.579 "rw_mbytes_per_sec": 0, 00:04:51.579 "r_mbytes_per_sec": 0, 00:04:51.579 "w_mbytes_per_sec": 0 00:04:51.579 }, 00:04:51.579 "claimed": false, 00:04:51.579 "zoned": false, 00:04:51.579 "supported_io_types": { 00:04:51.579 "read": true, 00:04:51.579 "write": true, 00:04:51.579 "unmap": true, 00:04:51.579 "flush": true, 00:04:51.579 "reset": true, 00:04:51.579 "nvme_admin": false, 00:04:51.579 "nvme_io": false, 00:04:51.579 "nvme_io_md": false, 00:04:51.579 "write_zeroes": true, 00:04:51.579 "zcopy": true, 00:04:51.579 "get_zone_info": false, 00:04:51.579 "zone_management": false, 00:04:51.579 "zone_append": false, 00:04:51.579 "compare": false, 00:04:51.579 "compare_and_write": false, 00:04:51.579 "abort": true, 00:04:51.579 "seek_hole": false, 00:04:51.579 "seek_data": false, 00:04:51.579 "copy": true, 00:04:51.579 "nvme_iov_md": false 00:04:51.579 }, 00:04:51.579 "memory_domains": [ 00:04:51.579 { 00:04:51.579 "dma_device_id": "system", 00:04:51.579 "dma_device_type": 1 00:04:51.579 }, 00:04:51.579 { 00:04:51.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.579 "dma_device_type": 2 00:04:51.579 } 00:04:51.579 ], 00:04:51.579 "driver_specific": { 00:04:51.579 "passthru": { 00:04:51.579 "name": "Passthru0", 00:04:51.579 "base_bdev_name": "Malloc2" 00:04:51.579 } 00:04:51.579 } 00:04:51.579 } 00:04:51.579 ]' 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.579 07:40:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.579 00:04:51.579 real 0m0.255s 00:04:51.579 user 0m0.163s 00:04:51.579 sys 0m0.033s 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.579 07:40:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.579 ************************************ 00:04:51.579 END TEST rpc_daemon_integrity 00:04:51.579 ************************************ 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.579 07:40:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.579 07:40:36 rpc -- rpc/rpc.sh@84 -- # killprocess 3068248 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@948 -- # '[' -z 3068248 ']' 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@952 -- # kill -0 3068248 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@953 -- # uname 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068248 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068248' 00:04:51.579 killing process with pid 3068248 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@967 -- # kill 3068248 00:04:51.579 07:40:36 rpc -- common/autotest_common.sh@972 -- # wait 3068248 00:04:51.838 00:04:51.838 real 0m2.412s 00:04:51.838 user 0m3.100s 00:04:51.838 sys 0m0.653s 00:04:51.838 07:40:36 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.838 07:40:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.838 ************************************ 00:04:51.838 END TEST rpc 00:04:51.838 ************************************ 00:04:52.098 07:40:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.098 07:40:36 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.098 07:40:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.098 07:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.098 07:40:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.098 ************************************ 00:04:52.098 START TEST skip_rpc 00:04:52.098 ************************************ 00:04:52.098 07:40:36 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.098 * Looking for test storage... 
00:04:52.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.098 07:40:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.098 07:40:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.098 07:40:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.098 07:40:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.098 07:40:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.098 07:40:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.098 ************************************ 00:04:52.098 START TEST skip_rpc 00:04:52.098 ************************************ 00:04:52.098 07:40:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:52.098 07:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3068883 00:04:52.098 07:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.098 07:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.098 07:40:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:52.098 [2024-07-15 07:40:36.825907] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:04:52.098 [2024-07-15 07:40:36.825946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068883 ] 00:04:52.098 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.357 [2024-07-15 07:40:36.890501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.357 [2024-07-15 07:40:36.962127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3068883 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3068883 ']' 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3068883 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068883 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068883' 00:04:57.650 killing process with pid 3068883 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3068883 00:04:57.650 07:40:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3068883 00:04:57.650 00:04:57.650 real 0m5.360s 00:04:57.650 user 0m5.131s 00:04:57.650 sys 0m0.258s 00:04:57.650 07:40:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.650 07:40:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.650 ************************************ 00:04:57.650 END TEST skip_rpc 00:04:57.650 ************************************ 00:04:57.650 07:40:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.650 07:40:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:57.650 07:40:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.650 07:40:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.650 07:40:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.650 ************************************ 00:04:57.650 START TEST skip_rpc_with_json 00:04:57.650 ************************************ 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3069829 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3069829 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3069829 ']' 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
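Note: the NOT/valid_exec_arg chain in the skip_rpc test above asserts inverted success: rpc_cmd spdk_get_version has to fail (es=1) because that target ran with --no-rpc-server and never opened /var/tmp/spdk.sock. A hedged sketch of the helper's core semantics; the real NOT in autotest_common.sh also tracks the exit status, as the es=1 lines show:

    # sketch: succeed only when the wrapped command fails
    NOT() { ! "$@"; }
    NOT ./scripts/rpc.py spdk_get_version && echo 'RPC is down, as the test expects'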
00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.650 07:40:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.650 [2024-07-15 07:40:42.253584] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:04:57.650 [2024-07-15 07:40:42.253624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069829 ] 00:04:57.650 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.650 [2024-07-15 07:40:42.320397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.650 [2024-07-15 07:40:42.399904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.600 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 [2024-07-15 07:40:43.059216] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.601 request: 00:04:58.601 { 00:04:58.601 "trtype": "tcp", 00:04:58.601 "method": "nvmf_get_transports", 00:04:58.601 "req_id": 1 00:04:58.601 } 00:04:58.601 Got JSON-RPC error response 00:04:58.601 response: 00:04:58.601 { 00:04:58.601 "code": -19, 00:04:58.601 "message": "No such device" 00:04:58.601 } 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 [2024-07-15 07:40:43.067315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.601 { 00:04:58.601 "subsystems": [ 00:04:58.601 { 00:04:58.601 "subsystem": "vfio_user_target", 00:04:58.601 "config": null 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "keyring", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "iobuf", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "iobuf_set_options", 00:04:58.601 "params": { 00:04:58.601 "small_pool_count": 8192, 00:04:58.601 "large_pool_count": 1024, 00:04:58.601 "small_bufsize": 8192, 00:04:58.601 "large_bufsize": 
135168 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "sock", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "sock_set_default_impl", 00:04:58.601 "params": { 00:04:58.601 "impl_name": "posix" 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "sock_impl_set_options", 00:04:58.601 "params": { 00:04:58.601 "impl_name": "ssl", 00:04:58.601 "recv_buf_size": 4096, 00:04:58.601 "send_buf_size": 4096, 00:04:58.601 "enable_recv_pipe": true, 00:04:58.601 "enable_quickack": false, 00:04:58.601 "enable_placement_id": 0, 00:04:58.601 "enable_zerocopy_send_server": true, 00:04:58.601 "enable_zerocopy_send_client": false, 00:04:58.601 "zerocopy_threshold": 0, 00:04:58.601 "tls_version": 0, 00:04:58.601 "enable_ktls": false 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "sock_impl_set_options", 00:04:58.601 "params": { 00:04:58.601 "impl_name": "posix", 00:04:58.601 "recv_buf_size": 2097152, 00:04:58.601 "send_buf_size": 2097152, 00:04:58.601 "enable_recv_pipe": true, 00:04:58.601 "enable_quickack": false, 00:04:58.601 "enable_placement_id": 0, 00:04:58.601 "enable_zerocopy_send_server": true, 00:04:58.601 "enable_zerocopy_send_client": false, 00:04:58.601 "zerocopy_threshold": 0, 00:04:58.601 "tls_version": 0, 00:04:58.601 "enable_ktls": false 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "vmd", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "accel", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "accel_set_options", 00:04:58.601 "params": { 00:04:58.601 "small_cache_size": 128, 00:04:58.601 "large_cache_size": 16, 00:04:58.601 "task_count": 2048, 00:04:58.601 "sequence_count": 2048, 00:04:58.601 "buf_count": 2048 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "bdev", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "bdev_set_options", 00:04:58.601 "params": { 00:04:58.601 "bdev_io_pool_size": 65535, 00:04:58.601 "bdev_io_cache_size": 256, 00:04:58.601 "bdev_auto_examine": true, 00:04:58.601 "iobuf_small_cache_size": 128, 00:04:58.601 "iobuf_large_cache_size": 16 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "bdev_raid_set_options", 00:04:58.601 "params": { 00:04:58.601 "process_window_size_kb": 1024 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "bdev_iscsi_set_options", 00:04:58.601 "params": { 00:04:58.601 "timeout_sec": 30 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "bdev_nvme_set_options", 00:04:58.601 "params": { 00:04:58.601 "action_on_timeout": "none", 00:04:58.601 "timeout_us": 0, 00:04:58.601 "timeout_admin_us": 0, 00:04:58.601 "keep_alive_timeout_ms": 10000, 00:04:58.601 "arbitration_burst": 0, 00:04:58.601 "low_priority_weight": 0, 00:04:58.601 "medium_priority_weight": 0, 00:04:58.601 "high_priority_weight": 0, 00:04:58.601 "nvme_adminq_poll_period_us": 10000, 00:04:58.601 "nvme_ioq_poll_period_us": 0, 00:04:58.601 "io_queue_requests": 0, 00:04:58.601 "delay_cmd_submit": true, 00:04:58.601 "transport_retry_count": 4, 00:04:58.601 "bdev_retry_count": 3, 00:04:58.601 "transport_ack_timeout": 0, 00:04:58.601 "ctrlr_loss_timeout_sec": 0, 00:04:58.601 "reconnect_delay_sec": 0, 00:04:58.601 "fast_io_fail_timeout_sec": 0, 00:04:58.601 "disable_auto_failback": false, 00:04:58.601 "generate_uuids": false, 00:04:58.601 "transport_tos": 0, 
00:04:58.601 "nvme_error_stat": false, 00:04:58.601 "rdma_srq_size": 0, 00:04:58.601 "io_path_stat": false, 00:04:58.601 "allow_accel_sequence": false, 00:04:58.601 "rdma_max_cq_size": 0, 00:04:58.601 "rdma_cm_event_timeout_ms": 0, 00:04:58.601 "dhchap_digests": [ 00:04:58.601 "sha256", 00:04:58.601 "sha384", 00:04:58.601 "sha512" 00:04:58.601 ], 00:04:58.601 "dhchap_dhgroups": [ 00:04:58.601 "null", 00:04:58.601 "ffdhe2048", 00:04:58.601 "ffdhe3072", 00:04:58.601 "ffdhe4096", 00:04:58.601 "ffdhe6144", 00:04:58.601 "ffdhe8192" 00:04:58.601 ] 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "bdev_nvme_set_hotplug", 00:04:58.601 "params": { 00:04:58.601 "period_us": 100000, 00:04:58.601 "enable": false 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "bdev_wait_for_examine" 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "scsi", 00:04:58.601 "config": null 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "scheduler", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "framework_set_scheduler", 00:04:58.601 "params": { 00:04:58.601 "name": "static" 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "vhost_scsi", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "vhost_blk", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "ublk", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "nbd", 00:04:58.601 "config": [] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "nvmf", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "nvmf_set_config", 00:04:58.601 "params": { 00:04:58.601 "discovery_filter": "match_any", 00:04:58.601 "admin_cmd_passthru": { 00:04:58.601 "identify_ctrlr": false 00:04:58.601 } 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "nvmf_set_max_subsystems", 00:04:58.601 "params": { 00:04:58.601 "max_subsystems": 1024 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "nvmf_set_crdt", 00:04:58.601 "params": { 00:04:58.601 "crdt1": 0, 00:04:58.601 "crdt2": 0, 00:04:58.601 "crdt3": 0 00:04:58.601 } 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "method": "nvmf_create_transport", 00:04:58.601 "params": { 00:04:58.601 "trtype": "TCP", 00:04:58.601 "max_queue_depth": 128, 00:04:58.601 "max_io_qpairs_per_ctrlr": 127, 00:04:58.601 "in_capsule_data_size": 4096, 00:04:58.601 "max_io_size": 131072, 00:04:58.601 "io_unit_size": 131072, 00:04:58.601 "max_aq_depth": 128, 00:04:58.601 "num_shared_buffers": 511, 00:04:58.601 "buf_cache_size": 4294967295, 00:04:58.601 "dif_insert_or_strip": false, 00:04:58.601 "zcopy": false, 00:04:58.601 "c2h_success": true, 00:04:58.601 "sock_priority": 0, 00:04:58.601 "abort_timeout_sec": 1, 00:04:58.601 "ack_timeout": 0, 00:04:58.601 "data_wr_pool_size": 0 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "subsystem": "iscsi", 00:04:58.601 "config": [ 00:04:58.601 { 00:04:58.601 "method": "iscsi_set_options", 00:04:58.601 "params": { 00:04:58.601 "node_base": "iqn.2016-06.io.spdk", 00:04:58.601 "max_sessions": 128, 00:04:58.601 "max_connections_per_session": 2, 00:04:58.601 "max_queue_depth": 64, 00:04:58.601 "default_time2wait": 2, 00:04:58.601 "default_time2retain": 20, 00:04:58.601 "first_burst_length": 8192, 00:04:58.601 "immediate_data": true, 00:04:58.601 "allow_duplicated_isid": false, 00:04:58.601 
"error_recovery_level": 0, 00:04:58.601 "nop_timeout": 60, 00:04:58.601 "nop_in_interval": 30, 00:04:58.601 "disable_chap": false, 00:04:58.601 "require_chap": false, 00:04:58.601 "mutual_chap": false, 00:04:58.601 "chap_group": 0, 00:04:58.601 "max_large_datain_per_connection": 64, 00:04:58.601 "max_r2t_per_connection": 4, 00:04:58.601 "pdu_pool_size": 36864, 00:04:58.601 "immediate_data_pool_size": 16384, 00:04:58.601 "data_out_pool_size": 2048 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 } 00:04:58.601 ] 00:04:58.601 } 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3069829 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3069829 ']' 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3069829 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069829 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069829' 00:04:58.601 killing process with pid 3069829 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3069829 00:04:58.601 07:40:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3069829 00:04:58.863 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3070076 00:04:58.863 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.863 07:40:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3070076 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3070076 ']' 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3070076 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3070076 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3070076' 00:05:04.131 killing process with pid 3070076 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3070076 00:05:04.131 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3070076 
00:05:04.390 07:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.390 07:40:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.390 00:05:04.390 real 0m6.741s 00:05:04.390 user 0m6.554s 00:05:04.391 sys 0m0.614s 00:05:04.391 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.391 07:40:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.391 ************************************ 00:05:04.391 END TEST skip_rpc_with_json 00:05:04.391 ************************************ 00:05:04.391 07:40:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.391 07:40:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.391 07:40:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.391 07:40:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.391 07:40:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.391 ************************************ 00:05:04.391 START TEST skip_rpc_with_delay 00:05:04.391 ************************************ 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.391 [2024-07-15 07:40:49.063655] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
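Note: skip_rpc_with_delay asserts a deliberate startup failure: '--wait-for-rpc' pauses the app until an RPC tells it to continue, which is impossible under '--no-rpc-server', so spdk_tgt must refuse to start (the app.c error above) and the NOT wrapper turns that refusal into a pass. Reproducing the expected failure (sketch, flags taken from this trace):

    # expected to print the app.c error above and exit non-zero
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit=$?"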
00:05:04.391 [2024-07-15 07:40:49.063709] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:04.391
00:05:04.391 real 0m0.063s
00:05:04.391 user 0m0.042s
00:05:04.391 sys 0m0.020s
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.391 07:40:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:04.391 ************************************
00:05:04.391 END TEST skip_rpc_with_delay
00:05:04.391 ************************************
00:05:04.391 07:40:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:04.391 07:40:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:04.391 07:40:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:04.391 07:40:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:04.391 07:40:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.391 07:40:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.391 07:40:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.650 ************************************
00:05:04.650 START TEST exit_on_failed_rpc_init
00:05:04.650 ************************************
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3071045
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3071045
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3071045 ']'
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:04.650 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:04.650 [2024-07-15 07:40:49.195133] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:04.650 [2024-07-15 07:40:49.195174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071045 ]
00:05:04.650 EAL: No free 2048 kB hugepages reported on node 1
00:05:04.650 [2024-07-15 07:40:49.259350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.650 [2024-07-15 07:40:49.338816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:05.587 07:40:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:05.587 [2024-07-15 07:40:50.054068] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:05.587 [2024-07-15 07:40:50.054120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071275 ]
00:05:05.587 EAL: No free 2048 kB hugepages reported on node 1
00:05:05.587 [2024-07-15 07:40:50.127479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.587 [2024-07-15 07:40:50.201102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.587 [2024-07-15 07:40:50.201170] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:05.587 [2024-07-15 07:40:50.201186] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:05.587 [2024-07-15 07:40:50.201192] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3071045
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3071045 ']'
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3071045
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3071045
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:05.587 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3071045'
killing process with pid 3071045
07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3071045
07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3071045
00:05:05.886
00:05:05.886 real 0m1.486s
00:05:05.886 user 0m1.707s
00:05:05.886 sys 0m0.422s
00:05:05.886 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:05.886 07:40:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:05.886 ************************************
00:05:05.886 END TEST exit_on_failed_rpc_init
00:05:05.886 ************************************
00:05:06.145 07:40:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:06.146 07:40:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:06.146
00:05:06.146 real 0m14.012s
00:05:06.146 user 0m13.574s
00:05:06.146 sys 0m1.563s
00:05:06.146 07:40:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:06.146 07:40:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.146 ************************************
00:05:06.146 END TEST skip_rpc
00:05:06.146 ************************************
00:05:06.146 07:40:50 -- common/autotest_common.sh@1142 -- # return 0
00:05:06.146 07:40:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:06.146 07:40:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:06.146 07:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:06.146 07:40:50 -- common/autotest_common.sh@10 -- # set +x
00:05:06.146 ************************************
00:05:06.146 START TEST rpc_client
00:05:06.146 ************************************
00:05:06.146 07:40:50 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:06.146 * Looking for test storage...
00:05:06.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:06.146 07:40:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:06.146 OK
00:05:06.146 07:40:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:06.146
00:05:06.146 real 0m0.115s
00:05:06.146 user 0m0.049s
00:05:06.146 sys 0m0.074s
00:05:06.146 07:40:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:06.146 07:40:50 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:06.146 ************************************
00:05:06.146 END TEST rpc_client
00:05:06.146 ************************************
00:05:06.146 07:40:50 -- common/autotest_common.sh@1142 -- # return 0
00:05:06.146 07:40:50 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:06.146 07:40:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:06.146 07:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:06.146 07:40:50 -- common/autotest_common.sh@10 -- # set +x
00:05:06.406 ************************************
00:05:06.406 START TEST json_config
00:05:06.406 ************************************
00:05:06.406 07:40:50 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:06.406 07:40:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:40:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:06.406 07:40:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:06.406 07:40:51 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:06.406 07:40:51 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:06.406 07:40:51 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:06.406 07:40:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:06.406 07:40:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:06.406 07:40:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:06.406 07:40:51 json_config -- paths/export.sh@5 -- # export PATH
00:05:06.406 07:40:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@47 -- # : 0
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:06.406 07:40:51 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:06.406 07:40:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.407 07:40:51 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:05:06.407 07:40:51 json_config -- json_config/common.sh@9 -- # local app=target
00:05:06.407 07:40:51 json_config -- json_config/common.sh@10 -- # shift
00:05:06.407 07:40:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:06.407 07:40:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:06.407 07:40:51 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:06.407 07:40:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:06.407 07:40:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:06.407 07:40:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3071460
00:05:06.407 07:40:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:06.407 07:40:51 json_config -- json_config/common.sh@25 -- # waitforlisten 3071460 /var/tmp/spdk_tgt.sock
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 3071460 ']'
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:06.407 07:40:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:06.407 07:40:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:06.407 [2024-07-15 07:40:51.076579] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:06.407 [2024-07-15 07:40:51.076633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071460 ]
00:05:06.666 EAL: No free 2048 kB hugepages reported on node 1
00:05:06.666 [2024-07-15 07:40:51.359654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.925 [2024-07-15 07:40:51.427577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:07.184 07:40:51 json_config -- json_config/common.sh@26 -- # echo ''
00:05:07.184
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:07.184 07:40:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:07.184 07:40:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:05:07.184 07:40:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:10.481 07:40:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:10.481 07:40:54 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:10.481 07:40:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:10.481 07:40:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:05:10.481 07:40:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:10.481 07:40:55 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@55 -- # return 0
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:05:10.481 07:40:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:10.481 07:40:55 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:05:10.481 07:40:55 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:10.481 07:40:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:10.740 MallocForNvmf0
00:05:10.740 07:40:55 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:10.740 07:40:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:10.999 MallocForNvmf1
00:05:10.999 07:40:55 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:10.999 07:40:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:11.258 [2024-07-15 07:40:55.741017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:11.258 07:40:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:11.258 07:40:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:11.258 07:40:55 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:11.258 07:40:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:11.517 07:40:56 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:11.517 07:40:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:11.777 07:40:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:11.777 07:40:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:11.777 [2024-07-15 07:40:56.471290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:11.777 07:40:56 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:05:11.777 07:40:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:11.777 07:40:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.037 07:40:56 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:05:12.037 07:40:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:12.037 07:40:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.037 07:40:56 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:05:12.037 07:40:56 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:12.037 07:40:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:12.037 MallocBdevForConfigChangeCheck
00:05:12.037 07:40:56 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:05:12.037 07:40:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:12.037 07:40:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:12.037 07:40:56 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:05:12.037 07:40:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:12.605 07:40:57 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:05:12.605 07:40:57 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:05:12.605 07:40:57 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:05:12.605 07:40:57 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:05:12.605 07:40:57 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:13.983 Calling clear_iscsi_subsystem
00:05:13.983 Calling clear_nvmf_subsystem
00:05:13.983 Calling clear_nbd_subsystem
00:05:13.983 Calling clear_ublk_subsystem
00:05:13.983 Calling clear_vhost_blk_subsystem
00:05:13.983 Calling clear_vhost_scsi_subsystem
00:05:13.983 Calling clear_bdev_subsystem
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@343 -- # count=100
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:13.983 07:40:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:14.552 07:40:59 json_config -- json_config/json_config.sh@345 -- # break
00:05:14.552 07:40:59 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:05:14.552 07:40:59 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:05:14.552 07:40:59 json_config -- json_config/common.sh@31 -- # local app=target
00:05:14.552 07:40:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:14.552 07:40:59 json_config -- json_config/common.sh@35 -- # [[ -n 3071460 ]]
00:05:14.552 07:40:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3071460
00:05:14.552 07:40:59 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:14.552 07:40:59 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:14.552 07:40:59 json_config -- json_config/common.sh@41 -- # kill -0 3071460
00:05:14.552 07:40:59 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:14.811 07:40:59 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:14.811 07:40:59 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:14.811 07:40:59 json_config -- json_config/common.sh@41 -- # kill -0 3071460
00:05:14.811 07:40:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:14.811 07:40:59 json_config -- json_config/common.sh@43 -- # break
00:05:14.811 07:40:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:14.811 07:40:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:14.811 SPDK target shutdown done
00:05:14.811 07:40:59 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:05:14.811 07:40:59 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:14.811 07:40:59 json_config -- json_config/common.sh@9 -- # local app=target
00:05:14.811 07:40:59 json_config -- json_config/common.sh@10 -- # shift
00:05:14.811 07:40:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:14.811 07:40:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:14.811 07:40:59 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:14.811 07:40:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:14.811 07:40:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:14.811 07:40:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3073118
00:05:14.811 07:40:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:14.811 07:40:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:14.811 07:40:59 json_config -- json_config/common.sh@25 -- # waitforlisten 3073118 /var/tmp/spdk_tgt.sock
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@829 -- # '[' -z 3073118 ']'
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:14.811 07:40:59 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:15.071 [2024-07-15 07:40:59.573841] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:15.071 [2024-07-15 07:40:59.573900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073118 ]
00:05:15.071 EAL: No free 2048 kB hugepages reported on node 1
00:05:15.329 [2024-07-15 07:41:00.020082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.588 [2024-07-15 07:41:00.105288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.878 [2024-07-15 07:41:03.119098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:18.878 [2024-07-15 07:41:03.151411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:19.136 07:41:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:19.136 07:41:03 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:19.136 07:41:03 json_config -- json_config/common.sh@26 -- # echo ''
00:05:19.136
00:05:19.136 07:41:03 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:05:19.136 07:41:03 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
INFO: Checking if target configuration is the same...
00:05:19.136 07:41:03 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.136 07:41:03 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:05:19.136 07:41:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:19.136 + '[' 2 -ne 2 ']'
00:05:19.136 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:19.136 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:19.136 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:19.136 +++ basename /dev/fd/62
00:05:19.136 ++ mktemp /tmp/62.XXX
00:05:19.136 + tmp_file_1=/tmp/62.DUc
00:05:19.136 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.136 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:19.136 + tmp_file_2=/tmp/spdk_tgt_config.json.uaE
00:05:19.136 + ret=0
00:05:19.136 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.395 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.395 + diff -u /tmp/62.DUc /tmp/spdk_tgt_config.json.uaE
00:05:19.395 + echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
00:05:19.395 + rm /tmp/62.DUc /tmp/spdk_tgt_config.json.uaE
00:05:19.395 + exit 0
00:05:19.395 07:41:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:05:19.395 07:41:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
INFO: changing configuration and checking if this can be detected...
00:05:19.395 07:41:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:19.395 07:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:19.654 07:41:04 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.654 07:41:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:05:19.654 07:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:19.654 + '[' 2 -ne 2 ']'
00:05:19.654 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:19.654 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:19.654 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:19.654 +++ basename /dev/fd/62
00:05:19.654 ++ mktemp /tmp/62.XXX
00:05:19.654 + tmp_file_1=/tmp/62.iCl
00:05:19.654 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:19.654 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:19.654 + tmp_file_2=/tmp/spdk_tgt_config.json.v7K
00:05:19.654 + ret=0
00:05:19.654 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.912 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:19.912 + diff -u /tmp/62.iCl /tmp/spdk_tgt_config.json.v7K
00:05:19.912 + ret=1
00:05:19.912 + echo '=== Start of file: /tmp/62.iCl ==='
00:05:19.912 + cat /tmp/62.iCl
00:05:19.912 + echo '=== End of file: /tmp/62.iCl ==='
00:05:19.912 + echo ''
00:05:19.912 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v7K ==='
00:05:19.912 + cat /tmp/spdk_tgt_config.json.v7K
00:05:19.912 + echo '=== End of file: /tmp/spdk_tgt_config.json.v7K ==='
00:05:19.912 + echo ''
00:05:19.912 + rm /tmp/62.iCl /tmp/spdk_tgt_config.json.v7K
00:05:19.912 + exit 1
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
INFO: configuration change detected.
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@317 -- # [[ -n 3073118 ]]
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@193 -- # uname -s
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.171 07:41:04 json_config -- json_config/json_config.sh@323 -- # killprocess 3073118
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@948 -- # '[' -z 3073118 ']'
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@952 -- # kill -0 3073118
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@953 -- # uname
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3073118
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:20.171 07:41:04 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3073118'
killing process with pid 3073118
07:41:04 json_config -- common/autotest_common.sh@967 -- # kill 3073118
07:41:04 json_config -- common/autotest_common.sh@972 -- # wait 3073118
00:05:21.552 07:41:06 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:21.552 07:41:06 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:05:21.552 07:41:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:21.552 07:41:06 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.552 07:41:06 json_config -- json_config/json_config.sh@328 -- # return 0
00:05:21.552 07:41:06 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
INFO: Success
00:05:21.552
00:05:21.552 real 0m15.373s
00:05:21.552 user 0m16.300s
00:05:21.552 sys 0m1.920s
00:05:21.552 07:41:06 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.552 07:41:06 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.552 ************************************
00:05:21.552 END TEST json_config
00:05:21.552 ************************************
00:05:21.812 07:41:06 -- common/autotest_common.sh@1142 -- # return 0
00:05:21.812 07:41:06 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:21.812 07:41:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.812 07:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.812 07:41:06 -- common/autotest_common.sh@10 -- # set +x
00:05:21.812 ************************************
00:05:21.812 START TEST json_config_extra_key
00:05:21.812 ************************************
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:21.812 07:41:06 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:21.812 07:41:06 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:21.812 07:41:06 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:21.812 07:41:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:21.812 07:41:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:21.812 07:41:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:21.812 07:41:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:21.812 07:41:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:21.812 07:41:06 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:05:21.812 07:41:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3074388
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3074388 /var/tmp/spdk_tgt.sock
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3074388 ']'
00:05:21.812 07:41:06 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.812 07:41:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:21.812 [2024-07-15 07:41:06.511091] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:21.812 [2024-07-15 07:41:06.511145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074388 ] 00:05:21.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.071 [2024-07-15 07:41:06.792269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.330 [2024-07-15 07:41:06.860804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.588 07:41:07 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.588 07:41:07 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:22.588 00:05:22.588 07:41:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:22.588 INFO: shutting down applications... 00:05:22.588 07:41:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3074388 ]] 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3074388 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3074388 00:05:22.588 07:41:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3074388 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.155 07:41:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.155 SPDK target shutdown done 00:05:23.155 07:41:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.155 Success 00:05:23.155 00:05:23.155 real 0m1.448s 00:05:23.155 user 0m1.229s 00:05:23.155 sys 0m0.370s 00:05:23.155 07:41:07 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.155 07:41:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.155 ************************************ 00:05:23.155 END TEST json_config_extra_key 00:05:23.155 ************************************ 00:05:23.155 07:41:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.155 07:41:07 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.155 07:41:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.155 07:41:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.155 07:41:07 -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.155 ************************************ 00:05:23.155 START TEST alias_rpc 00:05:23.155 ************************************ 00:05:23.155 07:41:07 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.413 * Looking for test storage... 00:05:23.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.413 07:41:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.413 07:41:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3074671 00:05:23.413 07:41:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3074671 00:05:23.413 07:41:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.413 07:41:07 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3074671 ']' 00:05:23.413 07:41:07 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.413 07:41:07 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.413 07:41:07 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.414 07:41:07 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.414 07:41:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.414 [2024-07-15 07:41:08.026280] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:05:23.414 [2024-07-15 07:41:08.026329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074671 ] 00:05:23.414 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.414 [2024-07-15 07:41:08.093014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.672 [2024-07-15 07:41:08.176399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.240 07:41:08 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.240 07:41:08 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:24.240 07:41:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.498 07:41:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3074671 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3074671 ']' 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3074671 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3074671 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3074671' 00:05:24.498 killing process with pid 3074671 00:05:24.498 07:41:09 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3074671 00:05:24.498 07:41:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 3074671 00:05:24.757 00:05:24.757 real 0m1.502s 00:05:24.757 user 0m1.647s 00:05:24.757 sys 0m0.409s 00:05:24.757 07:41:09 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.757 07:41:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.757 ************************************ 00:05:24.757 END TEST alias_rpc 00:05:24.757 ************************************ 00:05:24.757 07:41:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.757 07:41:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:24.757 07:41:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.757 07:41:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.757 07:41:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.757 07:41:09 -- common/autotest_common.sh@10 -- # set +x 00:05:24.757 ************************************ 00:05:24.757 START TEST spdkcli_tcp 00:05:24.757 ************************************ 00:05:24.757 07:41:09 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.016 * Looking for test storage... 00:05:25.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3074960 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3074960 00:05:25.017 07:41:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3074960 ']' 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.017 07:41:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.017 [2024-07-15 07:41:09.599138] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
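Note: the alias_rpc test that just finished pipes a JSON config into scripts/rpc.py load_config; the -i flag appears to be --include-aliases, which is the feature under test. A hedged sketch of the same call shape (the empty payload is illustrative, not the test's actual input):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # load_config reads the config from stdin when no filename is given.
  echo '{"subsystems": []}' | $SPDK/scripts/rpc.py load_config -i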
00:05:25.017 [2024-07-15 07:41:09.599191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074960 ] 00:05:25.017 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.017 [2024-07-15 07:41:09.651189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.017 [2024-07-15 07:41:09.725804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.017 [2024-07-15 07:41:09.725807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.984 07:41:10 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.984 07:41:10 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:25.985 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3075154 00:05:25.985 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.985 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.985 [ 00:05:25.985 "bdev_malloc_delete", 00:05:25.985 "bdev_malloc_create", 00:05:25.985 "bdev_null_resize", 00:05:25.985 "bdev_null_delete", 00:05:25.985 "bdev_null_create", 00:05:25.985 "bdev_nvme_cuse_unregister", 00:05:25.985 "bdev_nvme_cuse_register", 00:05:25.985 "bdev_opal_new_user", 00:05:25.985 "bdev_opal_set_lock_state", 00:05:25.985 "bdev_opal_delete", 00:05:25.985 "bdev_opal_get_info", 00:05:25.985 "bdev_opal_create", 00:05:25.985 "bdev_nvme_opal_revert", 00:05:25.985 "bdev_nvme_opal_init", 00:05:25.985 "bdev_nvme_send_cmd", 00:05:25.985 "bdev_nvme_get_path_iostat", 00:05:25.985 "bdev_nvme_get_mdns_discovery_info", 00:05:25.985 "bdev_nvme_stop_mdns_discovery", 00:05:25.985 "bdev_nvme_start_mdns_discovery", 00:05:25.985 "bdev_nvme_set_multipath_policy", 00:05:25.985 "bdev_nvme_set_preferred_path", 00:05:25.985 "bdev_nvme_get_io_paths", 00:05:25.985 "bdev_nvme_remove_error_injection", 00:05:25.985 "bdev_nvme_add_error_injection", 00:05:25.985 "bdev_nvme_get_discovery_info", 00:05:25.985 "bdev_nvme_stop_discovery", 00:05:25.985 "bdev_nvme_start_discovery", 00:05:25.985 "bdev_nvme_get_controller_health_info", 00:05:25.985 "bdev_nvme_disable_controller", 00:05:25.985 "bdev_nvme_enable_controller", 00:05:25.985 "bdev_nvme_reset_controller", 00:05:25.985 "bdev_nvme_get_transport_statistics", 00:05:25.985 "bdev_nvme_apply_firmware", 00:05:25.985 "bdev_nvme_detach_controller", 00:05:25.985 "bdev_nvme_get_controllers", 00:05:25.985 "bdev_nvme_attach_controller", 00:05:25.985 "bdev_nvme_set_hotplug", 00:05:25.985 "bdev_nvme_set_options", 00:05:25.985 "bdev_passthru_delete", 00:05:25.985 "bdev_passthru_create", 00:05:25.985 "bdev_lvol_set_parent_bdev", 00:05:25.985 "bdev_lvol_set_parent", 00:05:25.985 "bdev_lvol_check_shallow_copy", 00:05:25.985 "bdev_lvol_start_shallow_copy", 00:05:25.985 "bdev_lvol_grow_lvstore", 00:05:25.985 "bdev_lvol_get_lvols", 00:05:25.985 "bdev_lvol_get_lvstores", 00:05:25.985 "bdev_lvol_delete", 00:05:25.985 "bdev_lvol_set_read_only", 00:05:25.985 "bdev_lvol_resize", 00:05:25.985 "bdev_lvol_decouple_parent", 00:05:25.985 "bdev_lvol_inflate", 00:05:25.985 "bdev_lvol_rename", 00:05:25.985 "bdev_lvol_clone_bdev", 00:05:25.985 "bdev_lvol_clone", 00:05:25.985 "bdev_lvol_snapshot", 00:05:25.985 "bdev_lvol_create", 00:05:25.985 "bdev_lvol_delete_lvstore", 00:05:25.985 
"bdev_lvol_rename_lvstore", 00:05:25.985 "bdev_lvol_create_lvstore", 00:05:25.985 "bdev_raid_set_options", 00:05:25.985 "bdev_raid_remove_base_bdev", 00:05:25.985 "bdev_raid_add_base_bdev", 00:05:25.985 "bdev_raid_delete", 00:05:25.985 "bdev_raid_create", 00:05:25.985 "bdev_raid_get_bdevs", 00:05:25.985 "bdev_error_inject_error", 00:05:25.985 "bdev_error_delete", 00:05:25.985 "bdev_error_create", 00:05:25.985 "bdev_split_delete", 00:05:25.985 "bdev_split_create", 00:05:25.985 "bdev_delay_delete", 00:05:25.985 "bdev_delay_create", 00:05:25.985 "bdev_delay_update_latency", 00:05:25.985 "bdev_zone_block_delete", 00:05:25.985 "bdev_zone_block_create", 00:05:25.985 "blobfs_create", 00:05:25.985 "blobfs_detect", 00:05:25.985 "blobfs_set_cache_size", 00:05:25.985 "bdev_aio_delete", 00:05:25.985 "bdev_aio_rescan", 00:05:25.985 "bdev_aio_create", 00:05:25.985 "bdev_ftl_set_property", 00:05:25.985 "bdev_ftl_get_properties", 00:05:25.985 "bdev_ftl_get_stats", 00:05:25.985 "bdev_ftl_unmap", 00:05:25.985 "bdev_ftl_unload", 00:05:25.985 "bdev_ftl_delete", 00:05:25.985 "bdev_ftl_load", 00:05:25.985 "bdev_ftl_create", 00:05:25.985 "bdev_virtio_attach_controller", 00:05:25.985 "bdev_virtio_scsi_get_devices", 00:05:25.985 "bdev_virtio_detach_controller", 00:05:25.985 "bdev_virtio_blk_set_hotplug", 00:05:25.985 "bdev_iscsi_delete", 00:05:25.985 "bdev_iscsi_create", 00:05:25.985 "bdev_iscsi_set_options", 00:05:25.985 "accel_error_inject_error", 00:05:25.985 "ioat_scan_accel_module", 00:05:25.985 "dsa_scan_accel_module", 00:05:25.985 "iaa_scan_accel_module", 00:05:25.985 "vfu_virtio_create_scsi_endpoint", 00:05:25.985 "vfu_virtio_scsi_remove_target", 00:05:25.985 "vfu_virtio_scsi_add_target", 00:05:25.985 "vfu_virtio_create_blk_endpoint", 00:05:25.985 "vfu_virtio_delete_endpoint", 00:05:25.985 "keyring_file_remove_key", 00:05:25.985 "keyring_file_add_key", 00:05:25.985 "keyring_linux_set_options", 00:05:25.985 "iscsi_get_histogram", 00:05:25.985 "iscsi_enable_histogram", 00:05:25.985 "iscsi_set_options", 00:05:25.985 "iscsi_get_auth_groups", 00:05:25.985 "iscsi_auth_group_remove_secret", 00:05:25.985 "iscsi_auth_group_add_secret", 00:05:25.985 "iscsi_delete_auth_group", 00:05:25.985 "iscsi_create_auth_group", 00:05:25.985 "iscsi_set_discovery_auth", 00:05:25.985 "iscsi_get_options", 00:05:25.985 "iscsi_target_node_request_logout", 00:05:25.985 "iscsi_target_node_set_redirect", 00:05:25.985 "iscsi_target_node_set_auth", 00:05:25.985 "iscsi_target_node_add_lun", 00:05:25.985 "iscsi_get_stats", 00:05:25.985 "iscsi_get_connections", 00:05:25.985 "iscsi_portal_group_set_auth", 00:05:25.985 "iscsi_start_portal_group", 00:05:25.985 "iscsi_delete_portal_group", 00:05:25.985 "iscsi_create_portal_group", 00:05:25.985 "iscsi_get_portal_groups", 00:05:25.985 "iscsi_delete_target_node", 00:05:25.985 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.985 "iscsi_target_node_add_pg_ig_maps", 00:05:25.985 "iscsi_create_target_node", 00:05:25.985 "iscsi_get_target_nodes", 00:05:25.985 "iscsi_delete_initiator_group", 00:05:25.986 "iscsi_initiator_group_remove_initiators", 00:05:25.986 "iscsi_initiator_group_add_initiators", 00:05:25.986 "iscsi_create_initiator_group", 00:05:25.986 "iscsi_get_initiator_groups", 00:05:25.986 "nvmf_set_crdt", 00:05:25.986 "nvmf_set_config", 00:05:25.986 "nvmf_set_max_subsystems", 00:05:25.986 "nvmf_stop_mdns_prr", 00:05:25.986 "nvmf_publish_mdns_prr", 00:05:25.986 "nvmf_subsystem_get_listeners", 00:05:25.986 "nvmf_subsystem_get_qpairs", 00:05:25.986 "nvmf_subsystem_get_controllers", 00:05:25.986 
"nvmf_get_stats", 00:05:25.986 "nvmf_get_transports", 00:05:25.986 "nvmf_create_transport", 00:05:25.986 "nvmf_get_targets", 00:05:25.986 "nvmf_delete_target", 00:05:25.986 "nvmf_create_target", 00:05:25.986 "nvmf_subsystem_allow_any_host", 00:05:25.986 "nvmf_subsystem_remove_host", 00:05:25.986 "nvmf_subsystem_add_host", 00:05:25.986 "nvmf_ns_remove_host", 00:05:25.986 "nvmf_ns_add_host", 00:05:25.986 "nvmf_subsystem_remove_ns", 00:05:25.986 "nvmf_subsystem_add_ns", 00:05:25.986 "nvmf_subsystem_listener_set_ana_state", 00:05:25.986 "nvmf_discovery_get_referrals", 00:05:25.986 "nvmf_discovery_remove_referral", 00:05:25.986 "nvmf_discovery_add_referral", 00:05:25.986 "nvmf_subsystem_remove_listener", 00:05:25.986 "nvmf_subsystem_add_listener", 00:05:25.986 "nvmf_delete_subsystem", 00:05:25.986 "nvmf_create_subsystem", 00:05:25.986 "nvmf_get_subsystems", 00:05:25.986 "env_dpdk_get_mem_stats", 00:05:25.986 "nbd_get_disks", 00:05:25.986 "nbd_stop_disk", 00:05:25.986 "nbd_start_disk", 00:05:25.986 "ublk_recover_disk", 00:05:25.986 "ublk_get_disks", 00:05:25.986 "ublk_stop_disk", 00:05:25.986 "ublk_start_disk", 00:05:25.986 "ublk_destroy_target", 00:05:25.986 "ublk_create_target", 00:05:25.986 "virtio_blk_create_transport", 00:05:25.986 "virtio_blk_get_transports", 00:05:25.986 "vhost_controller_set_coalescing", 00:05:25.986 "vhost_get_controllers", 00:05:25.986 "vhost_delete_controller", 00:05:25.986 "vhost_create_blk_controller", 00:05:25.986 "vhost_scsi_controller_remove_target", 00:05:25.986 "vhost_scsi_controller_add_target", 00:05:25.986 "vhost_start_scsi_controller", 00:05:25.986 "vhost_create_scsi_controller", 00:05:25.986 "thread_set_cpumask", 00:05:25.986 "framework_get_governor", 00:05:25.986 "framework_get_scheduler", 00:05:25.986 "framework_set_scheduler", 00:05:25.986 "framework_get_reactors", 00:05:25.986 "thread_get_io_channels", 00:05:25.986 "thread_get_pollers", 00:05:25.986 "thread_get_stats", 00:05:25.986 "framework_monitor_context_switch", 00:05:25.986 "spdk_kill_instance", 00:05:25.986 "log_enable_timestamps", 00:05:25.986 "log_get_flags", 00:05:25.986 "log_clear_flag", 00:05:25.986 "log_set_flag", 00:05:25.986 "log_get_level", 00:05:25.986 "log_set_level", 00:05:25.986 "log_get_print_level", 00:05:25.986 "log_set_print_level", 00:05:25.986 "framework_enable_cpumask_locks", 00:05:25.986 "framework_disable_cpumask_locks", 00:05:25.986 "framework_wait_init", 00:05:25.986 "framework_start_init", 00:05:25.986 "scsi_get_devices", 00:05:25.986 "bdev_get_histogram", 00:05:25.986 "bdev_enable_histogram", 00:05:25.986 "bdev_set_qos_limit", 00:05:25.986 "bdev_set_qd_sampling_period", 00:05:25.986 "bdev_get_bdevs", 00:05:25.986 "bdev_reset_iostat", 00:05:25.986 "bdev_get_iostat", 00:05:25.986 "bdev_examine", 00:05:25.986 "bdev_wait_for_examine", 00:05:25.986 "bdev_set_options", 00:05:25.986 "notify_get_notifications", 00:05:25.986 "notify_get_types", 00:05:25.986 "accel_get_stats", 00:05:25.986 "accel_set_options", 00:05:25.986 "accel_set_driver", 00:05:25.986 "accel_crypto_key_destroy", 00:05:25.986 "accel_crypto_keys_get", 00:05:25.986 "accel_crypto_key_create", 00:05:25.986 "accel_assign_opc", 00:05:25.986 "accel_get_module_info", 00:05:25.986 "accel_get_opc_assignments", 00:05:25.986 "vmd_rescan", 00:05:25.986 "vmd_remove_device", 00:05:25.986 "vmd_enable", 00:05:25.986 "sock_get_default_impl", 00:05:25.986 "sock_set_default_impl", 00:05:25.986 "sock_impl_set_options", 00:05:25.986 "sock_impl_get_options", 00:05:25.986 "iobuf_get_stats", 00:05:25.986 "iobuf_set_options", 
00:05:25.986 "keyring_get_keys", 00:05:25.986 "framework_get_pci_devices", 00:05:25.986 "framework_get_config", 00:05:25.986 "framework_get_subsystems", 00:05:25.986 "vfu_tgt_set_base_path", 00:05:25.986 "trace_get_info", 00:05:25.986 "trace_get_tpoint_group_mask", 00:05:25.986 "trace_disable_tpoint_group", 00:05:25.986 "trace_enable_tpoint_group", 00:05:25.986 "trace_clear_tpoint_mask", 00:05:25.986 "trace_set_tpoint_mask", 00:05:25.986 "spdk_get_version", 00:05:25.986 "rpc_get_methods" 00:05:25.986 ] 00:05:25.986 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.986 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.986 07:41:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3074960 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3074960 ']' 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3074960 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3074960 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.986 07:41:10 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3074960' 00:05:25.986 killing process with pid 3074960 00:05:25.987 07:41:10 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3074960 00:05:25.987 07:41:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3074960 00:05:26.251 00:05:26.251 real 0m1.544s 00:05:26.251 user 0m2.895s 00:05:26.251 sys 0m0.440s 00:05:26.251 07:41:10 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.251 07:41:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.251 ************************************ 00:05:26.251 END TEST spdkcli_tcp 00:05:26.251 ************************************ 00:05:26.510 07:41:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.510 07:41:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.510 07:41:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.510 07:41:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.510 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:05:26.510 ************************************ 00:05:26.510 START TEST dpdk_mem_utility 00:05:26.510 ************************************ 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.510 * Looking for test storage... 
00:05:26.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.510 07:41:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.510 07:41:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3075267 00:05:26.510 07:41:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3075267 00:05:26.510 07:41:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3075267 ']' 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.510 07:41:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.510 [2024-07-15 07:41:11.201641] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:05:26.510 [2024-07-15 07:41:11.201688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075267 ] 00:05:26.510 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.769 [2024-07-15 07:41:11.270019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.769 [2024-07-15 07:41:11.348981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.336 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.336 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:27.336 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.336 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.336 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.336 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.336 { 00:05:27.336 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.336 } 00:05:27.336 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.336 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.336 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.336 1 heaps totaling size 814.000000 MiB 00:05:27.336 size: 814.000000 MiB heap id: 0 00:05:27.336 end heaps---------- 00:05:27.336 8 mempools totaling size 598.116089 MiB 00:05:27.336 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.336 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.336 size: 84.521057 MiB name: bdev_io_3075267 00:05:27.336 size: 51.011292 MiB name: evtpool_3075267 00:05:27.336 
size: 50.003479 MiB name: msgpool_3075267 00:05:27.336 size: 21.763794 MiB name: PDU_Pool 00:05:27.336 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.336 size: 0.026123 MiB name: Session_Pool 00:05:27.336 end mempools------- 00:05:27.336 6 memzones totaling size 4.142822 MiB 00:05:27.336 size: 1.000366 MiB name: RG_ring_0_3075267 00:05:27.336 size: 1.000366 MiB name: RG_ring_1_3075267 00:05:27.336 size: 1.000366 MiB name: RG_ring_4_3075267 00:05:27.336 size: 1.000366 MiB name: RG_ring_5_3075267 00:05:27.336 size: 0.125366 MiB name: RG_ring_2_3075267 00:05:27.336 size: 0.015991 MiB name: RG_ring_3_3075267 00:05:27.336 end memzones------- 00:05:27.336 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.595 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.595 list of free elements. size: 12.519348 MiB 00:05:27.595 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.595 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.595 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.595 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.595 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.595 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.595 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.595 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.595 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.595 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.595 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.595 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.595 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.595 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.595 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.595 list of standard malloc elements. 
size: 199.218079 MiB 00:05:27.595 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.595 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.595 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.595 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.595 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.595 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.595 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.595 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.595 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.595 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:27.595 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.595 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.595 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.595 list of memzone associated elements. 
size: 602.262573 MiB 00:05:27.595 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.595 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.595 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.595 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.595 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.595 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3075267_0 00:05:27.595 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.595 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3075267_0 00:05:27.596 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.596 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3075267_0 00:05:27.596 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.596 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.596 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.596 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.596 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.596 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3075267 00:05:27.596 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.596 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3075267 00:05:27.596 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.596 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3075267 00:05:27.596 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.596 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.596 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.596 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.596 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.596 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.596 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.596 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.596 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.596 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3075267 00:05:27.596 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.596 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3075267 00:05:27.596 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.596 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3075267 00:05:27.596 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.596 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3075267 00:05:27.596 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.596 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3075267 00:05:27.596 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.596 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.596 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.596 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.596 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.596 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.596 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.596 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3075267 00:05:27.596 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.596 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.596 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.596 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.596 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.596 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3075267 00:05:27.596 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.596 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.596 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.596 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3075267 00:05:27.596 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.596 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3075267 00:05:27.596 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.596 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.596 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.596 07:41:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3075267 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3075267 ']' 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3075267 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3075267 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3075267' 00:05:27.596 killing process with pid 3075267 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3075267 00:05:27.596 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3075267 00:05:27.856 00:05:27.856 real 0m1.411s 00:05:27.856 user 0m1.460s 00:05:27.856 sys 0m0.432s 00:05:27.856 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.856 07:41:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.856 ************************************ 00:05:27.856 END TEST dpdk_mem_utility 00:05:27.856 ************************************ 00:05:27.856 07:41:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.856 07:41:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:27.856 07:41:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.856 07:41:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.856 07:41:12 -- common/autotest_common.sh@10 -- # set +x 00:05:27.856 ************************************ 00:05:27.856 START TEST event 00:05:27.856 ************************************ 00:05:27.856 07:41:12 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.115 * Looking for test storage... 
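Note: the heap/mempool/memzone report above is produced by scripts/dpdk_mem_info.py from the dump that the env_dpdk_get_mem_stats RPC writes (/tmp/spdk_mem_dump.txt, per the log). The sequence, assuming a running target:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  $SPDK/scripts/dpdk_mem_info.py -m 0           # element-level dump of heap id 0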
00:05:28.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.115 07:41:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.115 07:41:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.115 07:41:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.115 07:41:12 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:28.115 07:41:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.115 07:41:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.115 ************************************ 00:05:28.115 START TEST event_perf 00:05:28.115 ************************************ 00:05:28.115 07:41:12 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.115 Running I/O for 1 seconds...[2024-07-15 07:41:12.690304] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:05:28.115 [2024-07-15 07:41:12.690360] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075554 ] 00:05:28.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.115 [2024-07-15 07:41:12.749091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.115 [2024-07-15 07:41:12.825822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.115 [2024-07-15 07:41:12.825930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.115 [2024-07-15 07:41:12.826039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.115 [2024-07-15 07:41:12.826040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.145 Running I/O for 1 seconds... 00:05:29.145 lcore 0: 211017 00:05:29.145 lcore 1: 211017 00:05:29.145 lcore 2: 211018 00:05:29.145 lcore 3: 211018 00:05:29.145 done. 00:05:29.145 00:05:29.145 real 0m1.226s 00:05:29.145 user 0m4.149s 00:05:29.145 sys 0m0.070s 00:05:29.145 07:41:13 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.145 07:41:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.145 ************************************ 00:05:29.145 END TEST event_perf 00:05:29.145 ************************************ 00:05:29.403 07:41:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.403 07:41:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.403 07:41:13 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.403 07:41:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.403 07:41:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.403 ************************************ 00:05:29.403 START TEST event_reactor 00:05:29.403 ************************************ 00:05:29.403 07:41:13 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.403 [2024-07-15 07:41:13.985781] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
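Note: event_perf above ran with -m 0xF (a four-core mask) for -t 1 second, and each "lcore N:" line is the number of events that core processed. The core mask is the main knob; a hypothetical two-core run would be:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf \
      -m 0x3 -t 1   # cores 0-1, one second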
00:05:29.403 [2024-07-15 07:41:13.985847] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075805 ] 00:05:29.404 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.404 [2024-07-15 07:41:14.060676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.404 [2024-07-15 07:41:14.132476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.779 test_start 00:05:30.779 oneshot 00:05:30.779 tick 100 00:05:30.779 tick 100 00:05:30.779 tick 250 00:05:30.779 tick 100 00:05:30.779 tick 100 00:05:30.779 tick 100 00:05:30.779 tick 250 00:05:30.779 tick 500 00:05:30.779 tick 100 00:05:30.779 tick 100 00:05:30.779 tick 250 00:05:30.779 tick 100 00:05:30.779 tick 100 00:05:30.779 test_end 00:05:30.779 00:05:30.779 real 0m1.235s 00:05:30.779 user 0m1.144s 00:05:30.779 sys 0m0.086s 00:05:30.779 07:41:15 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.779 07:41:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.779 ************************************ 00:05:30.779 END TEST event_reactor 00:05:30.779 ************************************ 00:05:30.779 07:41:15 event -- common/autotest_common.sh@1142 -- # return 0 00:05:30.779 07:41:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.779 07:41:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:30.779 07:41:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.779 07:41:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.779 ************************************ 00:05:30.779 START TEST event_reactor_perf 00:05:30.779 ************************************ 00:05:30.779 07:41:15 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.779 [2024-07-15 07:41:15.288598] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:05:30.779 [2024-07-15 07:41:15.288659] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076061 ] 00:05:30.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.779 [2024-07-15 07:41:15.358327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.779 [2024-07-15 07:41:15.428521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.158 test_start 00:05:32.158 test_end 00:05:32.158 Performance: 511973 events per second 00:05:32.158 00:05:32.158 real 0m1.226s 00:05:32.158 user 0m1.142s 00:05:32.158 sys 0m0.080s 00:05:32.158 07:41:16 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.158 07:41:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.158 ************************************ 00:05:32.158 END TEST event_reactor_perf 00:05:32.158 ************************************ 00:05:32.158 07:41:16 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.158 07:41:16 event -- event/event.sh@49 -- # uname -s 00:05:32.158 07:41:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.158 07:41:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.158 07:41:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.158 07:41:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.158 07:41:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.158 ************************************ 00:05:32.158 START TEST event_scheduler 00:05:32.158 ************************************ 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.158 * Looking for test storage... 00:05:32.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.158 07:41:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.158 07:41:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3076336 00:05:32.158 07:41:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.158 07:41:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.158 07:41:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3076336 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3076336 ']' 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.158 07:41:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.158 [2024-07-15 07:41:16.694610] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:05:32.158 [2024-07-15 07:41:16.694655] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076336 ] 00:05:32.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.158 [2024-07-15 07:41:16.761744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.158 [2024-07-15 07:41:16.836752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.158 [2024-07-15 07:41:16.836866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.158 [2024-07-15 07:41:16.836971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.158 [2024-07-15 07:41:16.836972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:33.096 07:41:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 [2024-07-15 07:41:17.511398] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:33.096 [2024-07-15 07:41:17.511417] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:33.096 [2024-07-15 07:41:17.511426] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.096 [2024-07-15 07:41:17.511431] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.096 [2024-07-15 07:41:17.511437] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.096 07:41:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 [2024-07-15 07:41:17.583145] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
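Note: the scheduler app is launched with --wait-for-rpc, so the dynamic scheduler is selected over RPC before framework initialization completes. The equivalent manual sequence against any target started with --wait-for-rpc:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py framework_get_scheduler   # confirm which scheduler is active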
00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.096 07:41:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.096 07:41:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 ************************************ 00:05:33.096 START TEST scheduler_create_thread 00:05:33.096 ************************************ 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 2 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.096 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 3 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 4 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 5 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 6 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 7 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 8 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 9 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 10 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.097 07:41:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.473 07:41:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.473 07:41:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.474 07:41:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.474 07:41:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.474 07:41:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.852 07:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.852 00:05:35.852 real 0m2.620s 00:05:35.852 user 0m0.025s 00:05:35.852 sys 0m0.003s 00:05:35.852 07:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.852 07:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.852 ************************************ 00:05:35.852 END TEST scheduler_create_thread 00:05:35.852 ************************************ 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:35.852 07:41:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.852 07:41:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3076336 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3076336 ']' 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3076336 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3076336 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3076336' 00:05:35.852 killing process with pid 3076336 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3076336 00:05:35.852 07:41:20 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3076336 00:05:36.111 [2024-07-15 07:41:20.717375] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
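Note: scheduler_create_thread drives the test app through an rpc.py plugin that lives under test/event/scheduler (rpc_cmd is the autotest wrapper around scripts/rpc.py, and the plugin must be on PYTHONPATH, which scheduler.sh arranges). The three call shapes exactly as logged; -n is the thread name, -m its cpumask, -a appears to be its busy percentage, and 11/12 are thread ids returned earlier:

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12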
00:05:36.371 00:05:36.371 real 0m4.347s 00:05:36.371 user 0m8.191s 00:05:36.371 sys 0m0.395s 00:05:36.371 07:41:20 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.371 07:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.371 ************************************ 00:05:36.371 END TEST event_scheduler 00:05:36.371 ************************************ 00:05:36.371 07:41:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.371 07:41:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.371 07:41:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.371 07:41:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.371 07:41:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.371 07:41:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.371 ************************************ 00:05:36.371 START TEST app_repeat 00:05:36.371 ************************************ 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3077076 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3077076' 00:05:36.371 Process app_repeat pid: 3077076 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.371 spdk_app_start Round 0 00:05:36.371 07:41:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3077076 /var/tmp/spdk-nbd.sock 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3077076 ']' 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.371 07:41:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.371 [2024-07-15 07:41:21.011844] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:05:36.371 [2024-07-15 07:41:21.011896] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077076 ] 00:05:36.371 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.371 [2024-07-15 07:41:21.081306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.630 [2024-07-15 07:41:21.158670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.630 [2024-07-15 07:41:21.158671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.197 07:41:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.197 07:41:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:37.197 07:41:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.455 Malloc0 00:05:37.455 07:41:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.455 Malloc1 00:05:37.714 07:41:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.714 /dev/nbd0 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.714 07:41:22 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.714 1+0 records in 00:05:37.714 1+0 records out 00:05:37.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195041 s, 21.0 MB/s 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.714 07:41:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.714 07:41:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.972 /dev/nbd1 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.972 1+0 records in 00:05:37.972 1+0 records out 00:05:37.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187827 s, 21.8 MB/s 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.972 07:41:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.972 07:41:22 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.972 07:41:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.231 { 00:05:38.231 "nbd_device": "/dev/nbd0", 00:05:38.231 "bdev_name": "Malloc0" 00:05:38.231 }, 00:05:38.231 { 00:05:38.231 "nbd_device": "/dev/nbd1", 00:05:38.231 "bdev_name": "Malloc1" 00:05:38.231 } 00:05:38.231 ]' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.231 { 00:05:38.231 "nbd_device": "/dev/nbd0", 00:05:38.231 "bdev_name": "Malloc0" 00:05:38.231 }, 00:05:38.231 { 00:05:38.231 "nbd_device": "/dev/nbd1", 00:05:38.231 "bdev_name": "Malloc1" 00:05:38.231 } 00:05:38.231 ]' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.231 /dev/nbd1' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.231 /dev/nbd1' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.231 256+0 records in 00:05:38.231 256+0 records out 00:05:38.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103531 s, 101 MB/s 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.231 256+0 records in 00:05:38.231 256+0 records out 00:05:38.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139731 s, 75.0 MB/s 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.231 256+0 records in 00:05:38.231 256+0 records out 00:05:38.231 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0150751 s, 69.6 MB/s 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.231 07:41:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.490 07:41:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.748 07:41:23 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.748 07:41:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.006 07:41:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.007 07:41:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.007 07:41:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.007 07:41:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.265 07:41:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.265 [2024-07-15 07:41:23.946569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.265 [2024-07-15 07:41:24.014116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.265 [2024-07-15 07:41:24.014117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.525 [2024-07-15 07:41:24.054955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.525 [2024-07-15 07:41:24.054996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.059 07:41:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.059 07:41:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.059 spdk_app_start Round 1 00:05:42.059 07:41:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3077076 /var/tmp/spdk-nbd.sock 00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3077076 ']' 00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
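Round 1 begins, like every round, by restarting app_repeat against the same /var/tmp/spdk-nbd.sock and blocking in waitforlisten until the RPC server answers on that socket. A rough sketch of that readiness loop (retry count and sleep interval are illustrative, not the exact values in autotest_common.sh):

    sock=/var/tmp/spdk-nbd.sock

    for ((i = 0; i < 100; i++)); do
        # Give up early if the app died instead of listening.
        kill -0 "$repeat_pid" 2>/dev/null || { echo 'app died' >&2; exit 1; }
        # rpc_get_methods succeeds as soon as the RPC server is accepting.
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done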
00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.059 07:41:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.318 07:41:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.318 07:41:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.318 07:41:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.577 Malloc0 00:05:42.577 07:41:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.577 Malloc1 00:05:42.836 07:41:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.836 /dev/nbd0 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:42.836 1+0 records in 00:05:42.836 1+0 records out 00:05:42.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227654 s, 18.0 MB/s 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.836 07:41:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.836 07:41:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.096 /dev/nbd1 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.096 1+0 records in 00:05:43.096 1+0 records out 00:05:43.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225338 s, 18.2 MB/s 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.096 07:41:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.096 07:41:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:43.356 { 00:05:43.356 "nbd_device": "/dev/nbd0", 00:05:43.356 "bdev_name": "Malloc0" 00:05:43.356 }, 00:05:43.356 { 00:05:43.356 "nbd_device": "/dev/nbd1", 00:05:43.356 "bdev_name": "Malloc1" 00:05:43.356 } 00:05:43.356 ]' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.356 { 00:05:43.356 "nbd_device": "/dev/nbd0", 00:05:43.356 "bdev_name": "Malloc0" 00:05:43.356 }, 00:05:43.356 { 00:05:43.356 "nbd_device": "/dev/nbd1", 00:05:43.356 "bdev_name": "Malloc1" 00:05:43.356 } 00:05:43.356 ]' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.356 /dev/nbd1' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.356 /dev/nbd1' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.356 07:41:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.356 256+0 records in 00:05:43.356 256+0 records out 00:05:43.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103985 s, 101 MB/s 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.356 256+0 records in 00:05:43.356 256+0 records out 00:05:43.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136419 s, 76.9 MB/s 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.356 256+0 records in 00:05:43.356 256+0 records out 00:05:43.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145258 s, 72.2 MB/s 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.356 07:41:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.615 07:41:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.615 07:41:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.615 07:41:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.616 07:41:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.875 07:41:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.134 07:41:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.134 07:41:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.393 07:41:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.393 [2024-07-15 07:41:29.076911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.393 [2024-07-15 07:41:29.143553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.394 [2024-07-15 07:41:29.143553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.653 [2024-07-15 07:41:29.185267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.653 [2024-07-15 07:41:29.185307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.242 07:41:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.242 07:41:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.242 spdk_app_start Round 2 00:05:47.242 07:41:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3077076 /var/tmp/spdk-nbd.sock 00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3077076 ']' 00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
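Each round then repeats one data path: bdev_malloc_create builds a 64 MB malloc bdev with 4096-byte blocks, nbd_start_disk exposes it as a kernel block device, 1 MiB of /dev/urandom is written through it with dd, and cmp -b -n 1M checks the readback byte-for-byte. Condensed into a standalone sketch (mktemp stands in for the suite's fixed nbdrandtest file):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # 64 MB malloc bdev, 4096-byte blocks; the RPC prints the bdev name.
    bdev=$($rpc bdev_malloc_create 64 4096)
    $rpc nbd_start_disk "$bdev" /dev/nbd0

    # Write 1 MiB of random data through the nbd device, then verify readback.
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0     # non-zero exit on any byte mismatch

    $rpc nbd_stop_disk /dev/nbd0
    rm -f "$tmp"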
00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.242 07:41:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.501 07:41:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.501 07:41:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.501 07:41:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.761 Malloc0 00:05:47.761 07:41:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.761 Malloc1 00:05:47.761 07:41:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.761 07:41:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.019 /dev/nbd0 00:05:48.020 07:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.020 07:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:48.020 1+0 records in 00:05:48.020 1+0 records out 00:05:48.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227878 s, 18.0 MB/s 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.020 07:41:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.020 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.020 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.020 07:41:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.279 /dev/nbd1 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.279 1+0 records in 00:05:48.279 1+0 records out 00:05:48.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223212 s, 18.4 MB/s 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.279 07:41:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.279 07:41:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:48.538 { 00:05:48.538 "nbd_device": "/dev/nbd0", 00:05:48.538 "bdev_name": "Malloc0" 00:05:48.538 }, 00:05:48.538 { 00:05:48.538 "nbd_device": "/dev/nbd1", 00:05:48.538 "bdev_name": "Malloc1" 00:05:48.538 } 00:05:48.538 ]' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.538 { 00:05:48.538 "nbd_device": "/dev/nbd0", 00:05:48.538 "bdev_name": "Malloc0" 00:05:48.538 }, 00:05:48.538 { 00:05:48.538 "nbd_device": "/dev/nbd1", 00:05:48.538 "bdev_name": "Malloc1" 00:05:48.538 } 00:05:48.538 ]' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.538 /dev/nbd1' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.538 /dev/nbd1' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.538 256+0 records in 00:05:48.538 256+0 records out 00:05:48.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103523 s, 101 MB/s 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.538 256+0 records in 00:05:48.538 256+0 records out 00:05:48.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147392 s, 71.1 MB/s 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.538 256+0 records in 00:05:48.538 256+0 records out 00:05:48.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149895 s, 70.0 MB/s 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.538 07:41:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.796 07:41:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.053 07:41:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.054 07:41:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.054 07:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.054 07:41:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.054 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.054 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.312 07:41:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.312 07:41:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.312 07:41:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.570 [2024-07-15 07:41:34.223789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.570 [2024-07-15 07:41:34.290526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.570 [2024-07-15 07:41:34.290527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.828 [2024-07-15 07:41:34.331465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.828 [2024-07-15 07:41:34.331505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.361 07:41:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3077076 /var/tmp/spdk-nbd.sock 00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3077076 ']' 00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
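Teardown mirrors setup: nbd_stop_disk detaches each device, waitfornbd_exit polls /proc/partitions until the nbd entry disappears, and a final nbd_get_disks must parse to an empty list. Roughly, with $rpc defined as in the previous sketch and illustrative loop bounds:

    # After nbd_stop_disk, wait for the kernel to retire the device node.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions || break
        sleep 0.1
    done

    # nbd_get_disks must now return an empty JSON array.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]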
00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.361 07:41:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:52.620 07:41:37 event.app_repeat -- event/event.sh@39 -- # killprocess 3077076 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3077076 ']' 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3077076 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3077076 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3077076' 00:05:52.620 killing process with pid 3077076 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3077076 00:05:52.620 07:41:37 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3077076 00:05:52.879 spdk_app_start is called in Round 0. 00:05:52.879 Shutdown signal received, stop current app iteration 00:05:52.879 Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 reinitialization... 00:05:52.879 spdk_app_start is called in Round 1. 00:05:52.879 Shutdown signal received, stop current app iteration 00:05:52.879 Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 reinitialization... 00:05:52.879 spdk_app_start is called in Round 2. 00:05:52.879 Shutdown signal received, stop current app iteration 00:05:52.879 Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 reinitialization... 00:05:52.879 spdk_app_start is called in Round 3. 
00:05:52.879 Shutdown signal received, stop current app iteration 00:05:52.879 07:41:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:52.879 07:41:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:52.879 00:05:52.879 real 0m16.467s 00:05:52.879 user 0m35.807s 00:05:52.879 sys 0m2.377s 00:05:52.879 07:41:37 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.879 07:41:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.879 ************************************ 00:05:52.879 END TEST app_repeat 00:05:52.879 ************************************ 00:05:52.879 07:41:37 event -- common/autotest_common.sh@1142 -- # return 0 00:05:52.879 07:41:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:52.879 07:41:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:52.879 07:41:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.879 07:41:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.879 07:41:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.879 ************************************ 00:05:52.879 START TEST cpu_locks 00:05:52.879 ************************************ 00:05:52.879 07:41:37 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:52.879 * Looking for test storage... 00:05:52.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:52.879 07:41:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:52.879 07:41:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:52.879 07:41:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:52.880 07:41:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.139 ************************************ 00:05:53.139 START TEST default_locks 00:05:53.139 ************************************ 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3080144 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3080144 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3080144 ']' 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
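
The app_repeat teardown above first verifies that no nbd devices are left attached before killing the target. A minimal sketch of that check, reconstructed from the xtrace (rpc.py path and socket are the ones in the log; the || true mirrors the `true` the trace shows, since grep -c exits non-zero when it counts zero matches):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)      # JSON array of attached nbd devices
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')  # one /dev/nbdN per line, empty here
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)            # grep -c exits 1 on a zero count
    [ "$count" -ne 0 ] && exit 1                                          # the test requires zero devices left
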
00:05:52.879 07:41:37 event -- common/autotest_common.sh@1142 -- # return 0
00:05:52.879 07:41:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:52.879 07:41:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:52.879 07:41:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:52.879 07:41:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.879 07:41:37 event -- common/autotest_common.sh@10 -- # set +x
00:05:52.879 ************************************
00:05:52.879 START TEST cpu_locks
00:05:52.879 ************************************
00:05:52.879 07:41:37 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:52.879 * Looking for test storage...
00:05:52.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:52.879 07:41:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:52.879 07:41:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:52.880 07:41:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:52.880 07:41:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.880 07:41:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.139 ************************************
00:05:53.139 START TEST default_locks
00:05:53.139 ************************************
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3080144
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3080144
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3080144 ']'
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:53.139 07:41:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.139 [2024-07-15 07:41:37.686689] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:53.139 [2024-07-15 07:41:37.686740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080144 ]
00:05:53.139 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.139 [2024-07-15 07:41:37.753600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.139 [2024-07-15 07:41:37.833120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:54.075 lslocks: write error
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3080144 ']'
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080144'
killing process with pid 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3080144
00:05:54.075 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3080144
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3080144
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3080144
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3080144
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3080144 ']'
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:54.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3080144) - No such process
00:05:54.334 ERROR: process (pid: 3080144) is no longer running
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:54.334
00:05:54.334 real 0m1.328s
00:05:54.334 user 0m1.394s
00:05:54.334 sys 0m0.414s
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.334 07:41:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:54.334 ************************************
00:05:54.334 END TEST default_locks
00:05:54.334 ************************************
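
default_locks passes because spdk_tgt -m 0x1 takes an flock on a per-core lock file under /var/tmp, which lslocks can attribute to the holding pid. A reduced sketch of the locks_exist helper exercised above (helper name, pid, and the lock-file marker are from the trace; the body is an assumption based on what the xtrace shows):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock  # grep -q exits at the first match; the closed pipe
    }                                            # is what produces the harmless 'lslocks: write error'
    locks_exist 3080144 && echo 'core lock held by 3080144'
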
00:05:54.334 07:41:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:54.334 07:41:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:54.334 07:41:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:54.334 07:41:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:54.334 07:41:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:54.334 ************************************
00:05:54.334 START TEST default_locks_via_rpc
00:05:54.334 ************************************
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3080416
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3080416
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3080416 ']'
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:54.334 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:54.594 [2024-07-15 07:41:39.080893] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:54.594 [2024-07-15 07:41:39.080934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080416 ]
00:05:54.594 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.594 [2024-07-15 07:41:39.131342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.594 [2024-07-15 07:41:39.211050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3080416
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3080416
00:05:55.164 07:41:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3080416
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3080416 ']'
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3080416
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080416
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080416'
killing process with pid 3080416
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3080416
00:05:55.733 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3080416
00:05:55.992
00:05:55.992 real 0m1.550s
00:05:55.992 user 0m1.648s
00:05:55.992 sys 0m0.485s
00:05:55.992 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:55.992 07:41:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.992 ************************************
00:05:55.992 END TEST default_locks_via_rpc
00:05:55.992 ************************************
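
default_locks_via_rpc exercises the same lock files but toggles them on a live target, using the two RPCs visible in the xtrace. A sketch with the repo's rpc.py (talking to the default /var/tmp/spdk.sock is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks  # releases the lock files, so the no_locks check passes
    "$rpc" framework_enable_cpumask_locks   # re-acquires them, so locks_exist passes again
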
00:05:55.992 07:41:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:55.992 07:41:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:55.992 07:41:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:55.992 07:41:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:55.992 07:41:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:55.992 ************************************
00:05:55.992 START TEST non_locking_app_on_locked_coremask
00:05:55.992 ************************************
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3080810
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3080810 /var/tmp/spdk.sock
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3080810 ']'
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:55.992 07:41:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.251 [2024-07-15 07:41:40.698498] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:56.251 [2024-07-15 07:41:40.698542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080810 ]
00:05:56.251 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.251 [2024-07-15 07:41:40.765400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.251 [2024-07-15 07:41:40.835909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.820 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:56.820 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:56.820 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:56.820 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3080826
00:05:56.820 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3080826 /var/tmp/spdk2.sock
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3080826 ']'
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:56.821 07:41:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:57.080 [2024-07-15 07:41:41.545443] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:57.080 [2024-07-15 07:41:41.545490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080826 ]
00:05:57.080 EAL: No free 2048 kB hugepages reported on node 1
00:05:57.080 [2024-07-15 07:41:41.621754] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:57.080 [2024-07-15 07:41:41.621780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.080 [2024-07-15 07:41:41.692252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.649 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:57.649 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:57.649 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3080810
00:05:57.649 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3080810
00:05:57.649 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:58.217 lslocks: write error
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3080810
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3080810 ']'
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3080810
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080810
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080810'
killing process with pid 3080810
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3080810
00:05:58.217 07:41:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3080810
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3080826
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3080826 ']'
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3080826
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080826
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080826'
killing process with pid 3080826
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3080826
00:05:59.154 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3080826
00:05:59.412
00:05:59.412 real 0m3.291s
00:05:59.412 user 0m3.513s
00:05:59.412 sys 0m0.928s
00:05:59.413 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:59.413 07:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.413 ************************************
00:05:59.413 END TEST non_locking_app_on_locked_coremask
00:05:59.413 ************************************
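
non_locking_app_on_locked_coremask demonstrated that --disable-cpumask-locks lets a second target share an already claimed core. The same scenario in isolation (binary path from the log; the backgrounding and second socket are illustrative):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, skips the lock, boots fine
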
00:05:59.413 07:41:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:59.413 07:41:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:59.413 07:41:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:59.413 07:41:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:59.413 07:41:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:59.413 ************************************
00:05:59.413 START TEST locking_app_on_unlocked_coremask
00:05:59.413 ************************************
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3081317
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3081317 /var/tmp/spdk.sock
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3081317 ']'
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:59.413 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.413 [2024-07-15 07:41:44.057981] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:05:59.413 [2024-07-15 07:41:44.058022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081317 ]
00:05:59.413 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.413 [2024-07-15 07:41:44.122214] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:59.413 [2024-07-15 07:41:44.122242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.671 [2024-07-15 07:41:44.189558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3081546
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3081546 /var/tmp/spdk2.sock
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3081546 ']'
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:00.240 07:41:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.240 [2024-07-15 07:41:44.889012] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:00.240 [2024-07-15 07:41:44.889061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081546 ]
00:06:00.240 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.240 [2024-07-15 07:41:44.964109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.499 [2024-07-15 07:41:45.110657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.068 07:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:01.068 07:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:01.068 07:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3081546
00:06:01.068 07:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3081546
00:06:01.068 07:41:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:01.635 lslocks: write error
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3081317
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3081317 ']'
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3081317
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081317
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081317'
killing process with pid 3081317
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3081317
00:06:01.635 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3081317
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3081546
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3081546 ']'
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3081546
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081546
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081546'
killing process with pid 3081546
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3081546
00:06:02.201 07:41:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3081546
00:06:02.460
00:06:02.460 real 0m3.109s
00:06:02.460 user 0m3.319s
00:06:02.460 sys 0m0.881s
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.460 ************************************
00:06:02.460 END TEST locking_app_on_unlocked_coremask
00:06:02.460 ************************************
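
Every test above tears down through the same killprocess helper, which refuses to signal the wrong thing before sending SIGTERM. A reduced sketch matching the xtrace (the real helper's uname and sudo handling is more involved; this is an assumption):

    killprocess() {
        [ -z "$1" ] && return 1                        # refuse an empty pid
        kill -0 "$1" || return                         # is the process still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$1")  # reactor_0 for an SPDK app thread
        [ "$process_name" = sudo ] && return 1         # never SIGTERM a sudo wrapper directly
        echo "killing process with pid $1"
        kill "$1" && wait "$1"                         # wait reaps the job and propagates its status
    }
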
00:06:02.460 07:41:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:02.460 07:41:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:02.460 07:41:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:02.460 07:41:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.460 07:41:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:02.460 ************************************
00:06:02.460 START TEST locking_app_on_locked_coremask
00:06:02.460 ************************************
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3081861
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3081861 /var/tmp/spdk.sock
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3081861 ']'
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:02.460 07:41:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.719 [2024-07-15 07:41:47.232291] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:02.719 [2024-07-15 07:41:47.232336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081861 ]
00:06:02.719 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.719 [2024-07-15 07:41:47.298629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.719 [2024-07-15 07:41:47.378731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3082046
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3082046 /var/tmp/spdk2.sock
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3082046 /var/tmp/spdk2.sock
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3082046 /var/tmp/spdk2.sock
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3082046 ']'
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:03.285 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:03.543 [2024-07-15 07:41:48.061541] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:03.543 [2024-07-15 07:41:48.061586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082046 ]
00:06:03.543 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.543 [2024-07-15 07:41:48.137399] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3081861 has claimed it.
00:06:03.543 [2024-07-15 07:41:48.137431] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:04.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3082046) - No such process
00:06:04.129 ERROR: process (pid: 3082046) is no longer running
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3081861
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3081861
00:06:04.129 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:04.406 lslocks: write error
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3081861
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3081861 ']'
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3081861
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081861
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081861'
killing process with pid 3081861
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3081861
00:06:04.406 07:41:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3081861
00:06:04.666
00:06:04.666 real 0m2.056s
00:06:04.666 user 0m2.236s
00:06:04.666 sys 0m0.549s
00:06:04.666 07:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:04.666 07:41:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.666 ************************************
00:06:04.666 END TEST locking_app_on_locked_coremask
00:06:04.666 ************************************
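
locking_app_on_locked_coremask hinges on the NOT wrapper: the second target failing to come up is the passing condition. A reduced sketch of the inversion the xtrace shows (valid_exec_arg and the signal-exit handling of the real helper are omitted here):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture a non-zero status
        (( !es == 0 ))   # bash arithmetic: exit 0 only when es != 0
    }
    NOT waitforlisten 3082046  # passes above, since pid 3082046 exited instead of listening
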
00:06:04.666 07:41:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:04.666 07:41:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:04.666 07:41:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:04.666 07:41:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.666 07:41:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.666 ************************************
00:06:04.666 START TEST locking_overlapped_coremask
00:06:04.666 ************************************
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3082307
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3082307 /var/tmp/spdk.sock
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3082307 ']'
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:04.666 07:41:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.666 [2024-07-15 07:41:49.353931] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:04.666 [2024-07-15 07:41:49.353975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082307 ]
00:06:04.666 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.924 [2024-07-15 07:41:49.419505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:04.925 [2024-07-15 07:41:49.501409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.925 [2024-07-15 07:41:49.501515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.925 [2024-07-15 07:41:49.501516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3082507
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3082507 /var/tmp/spdk2.sock
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3082507 /var/tmp/spdk2.sock
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3082507 /var/tmp/spdk2.sock
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3082507 ']'
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.492 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:05.493 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.493 [2024-07-15 07:41:50.195138] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:05.493 [2024-07-15 07:41:50.195188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082507 ]
00:06:05.751 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.751 [2024-07-15 07:41:50.269132] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3082307 has claimed it.
00:06:05.751 [2024-07-15 07:41:50.269163] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:06.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3082507) - No such process
00:06:06.320 ERROR: process (pid: 3082507) is no longer running
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3082307
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3082307 ']'
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3082307
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082307
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082307'
killing process with pid 3082307
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3082307
00:06:06.320 07:41:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3082307
00:06:06.580
00:06:06.580 real 0m1.898s
00:06:06.580 user 0m5.342s
00:06:06.580 sys 0m0.391s
00:06:06.580 07:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:06.580 07:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.580 ************************************
00:06:06.580 END TEST locking_overlapped_coremask
00:06:06.580 ************************************
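
locking_overlapped_coremask finishes with check_remaining_locks: after the -m 0x1c overlap is rejected, exactly the three lock files for cores 0-2 of the surviving -m 0x7 target must remain. The helper essentially as it appears in the xtrace above:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # glob result must match exactly
    }
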
00:06:06.840 [2024-07-15 07:41:51.386949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.840 [2024-07-15 07:41:51.466775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.840 [2024-07-15 07:41:51.466884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.840 [2024-07-15 07:41:51.466884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3082814 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3082814 /var/tmp/spdk2.sock 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3082814 ']' 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.408 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.408 [2024-07-15 07:41:52.156358] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:07.408 [2024-07-15 07:41:52.156407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082814 ] 00:06:07.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.667 [2024-07-15 07:41:52.231233] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.667 [2024-07-15 07:41:52.231262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.667 [2024-07-15 07:41:52.382295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.667 [2024-07-15 07:41:52.382411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.667 [2024-07-15 07:41:52.382412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.237 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.237 [2024-07-15 07:41:52.989301] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3082598 has claimed it. 
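The overlap here is by construction: the first target (pid 3082598) runs on mask 0x7 and has just taken the core locks via RPC, while the second (pid 3082814) runs on mask 0x1c, so the two masks share core 2 and the second claim has to fail, as the JSON-RPC exchange below shows. The masks decode as in this throwaway helper (illustrative only, not part of the test suite):

    # Hypothetical helper: enumerate the cores in a hex cpumask.
    cores_in_mask() {
        local mask=$(( $1 )) i
        for (( i = 0; i < 64; i++ )); do
            (( mask & (1 << i) )) && printf '%d ' "$i"
        done
        printf '\n'
    }
    cores_in_mask 0x07   # -> 0 1 2
    cores_in_mask 0x1c   # -> 2 3 4  (core 2 is contested, hence the error above)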
00:06:08.497 request:
00:06:08.497 {
00:06:08.497   "method": "framework_enable_cpumask_locks",
00:06:08.497   "req_id": 1
00:06:08.497 }
00:06:08.497 Got JSON-RPC error response
00:06:08.497 response:
00:06:08.497 {
00:06:08.497   "code": -32603,
00:06:08.497   "message": "Failed to claim CPU core: 2"
00:06:08.497 }
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3082598 /var/tmp/spdk.sock
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3082598 ']'
00:06:08.497 07:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3082814 /var/tmp/spdk2.sock
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3082814 ']'
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:08.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
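-32603 is the JSON-RPC "internal error" code; the RPC layer surfaces the claim_cpu_cores failure as "Failed to claim CPU core: 2". The same pair of calls can be reproduced by hand against the two sockets (rpc.py path relative to the SPDK tree; this mirrors what the rpc_cmd wrapper does above):

    # First target (default /var/tmp/spdk.sock): succeeds and claims cores 0-2.
    ./scripts/rpc.py framework_enable_cpumask_locks
    # Second target: expected to fail with the error response shown above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks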
00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.497 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.756 00:06:08.756 real 0m2.097s 00:06:08.756 user 0m0.860s 00:06:08.756 sys 0m0.173s 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.756 07:41:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.756 ************************************ 00:06:08.756 END TEST locking_overlapped_coremask_via_rpc 00:06:08.756 ************************************ 00:06:08.756 07:41:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.756 07:41:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.756 07:41:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3082598 ]] 00:06:08.756 07:41:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3082598 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3082598 ']' 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3082598 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082598 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082598' 00:06:08.757 killing process with pid 3082598 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3082598 00:06:08.757 07:41:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3082598 00:06:09.015 07:41:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3082814 ]] 00:06:09.015 07:41:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3082814 00:06:09.015 07:41:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3082814 ']' 00:06:09.015 07:41:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3082814 00:06:09.015 07:41:53 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:09.016 07:41:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.016 07:41:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082814 00:06:09.274 07:41:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:09.274 07:41:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:09.274 07:41:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082814' 00:06:09.274 killing process with pid 3082814 00:06:09.274 07:41:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3082814 00:06:09.274 07:41:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3082814 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3082598 ]] 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3082598 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3082598 ']' 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3082598 00:06:09.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3082598) - No such process 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3082598 is not found' 00:06:09.534 Process with pid 3082598 is not found 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3082814 ]] 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3082814 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3082814 ']' 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3082814 00:06:09.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3082814) - No such process 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3082814 is not found' 00:06:09.534 Process with pid 3082814 is not found 00:06:09.534 07:41:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.534 00:06:09.534 real 0m16.614s 00:06:09.534 user 0m28.741s 00:06:09.534 sys 0m4.741s 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.534 07:41:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.534 ************************************ 00:06:09.534 END TEST cpu_locks 00:06:09.534 ************************************ 00:06:09.534 07:41:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.534 00:06:09.534 real 0m41.614s 00:06:09.534 user 1m19.375s 00:06:09.534 sys 0m8.080s 00:06:09.534 07:41:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.534 07:41:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.534 ************************************ 00:06:09.534 END TEST event 00:06:09.535 ************************************ 00:06:09.535 07:41:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.535 07:41:54 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.535 07:41:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.535 07:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.535 
07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:06:09.535 ************************************ 00:06:09.535 START TEST thread 00:06:09.535 ************************************ 00:06:09.535 07:41:54 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.794 * Looking for test storage... 00:06:09.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.794 07:41:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.794 07:41:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:09.794 07:41:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.794 07:41:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.794 ************************************ 00:06:09.794 START TEST thread_poller_perf 00:06:09.794 ************************************ 00:06:09.794 07:41:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.794 [2024-07-15 07:41:54.369413] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:09.794 [2024-07-15 07:41:54.369483] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083361 ] 00:06:09.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.794 [2024-07-15 07:41:54.440247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.794 [2024-07-15 07:41:54.512841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.794 Running 1000 pollers for 1 seconds with 1 microseconds period. 
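In the result banner that follows, poller_cost is simply busy cycles divided by total_run_count, with the nanosecond figure obtained by scaling through tsc_hz. Recomputing from the reported numbers (awk used here purely as a calculator):

    # Reproduce the banner's poller_cost from its own fields:
    awk 'BEGIN {
        busy = 2309157178; runs = 415000; tsc_hz = 2300000000
        cyc = busy / runs                                    # ~5564 cycles per poller run
        printf "%d cyc, %d nsec\n", cyc, cyc * 1e9 / tsc_hz  # ~2419 ns at 2.3 GHz
    }'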
00:06:11.175 ======================================
00:06:11.175 busy:2309157178 (cyc)
00:06:11.175 total_run_count: 415000
00:06:11.175 tsc_hz: 2300000000 (cyc)
00:06:11.175 ======================================
00:06:11.175 poller_cost: 5564 (cyc), 2419 (nsec)
00:06:11.175
00:06:11.175 real 0m1.238s
00:06:11.175 user 0m1.144s
00:06:11.175 sys 0m0.089s
00:06:11.175 07:41:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:11.175 07:41:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:11.175 ************************************
00:06:11.175 END TEST thread_poller_perf
00:06:11.175 ************************************
00:06:11.175 07:41:55 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:11.175 07:41:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:11.175 07:41:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:06:11.175 07:41:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:11.175 07:41:55 thread -- common/autotest_common.sh@10 -- # set +x
00:06:11.175 ************************************
00:06:11.175 START TEST thread_poller_perf
00:06:11.175 ************************************
00:06:11.175 07:41:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:11.175 [2024-07-15 07:41:55.672656] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:11.175 [2024-07-15 07:41:55.672724] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083576 ]
00:06:11.175 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.175 [2024-07-15 07:41:55.743710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.175 [2024-07-15 07:41:55.816210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.175 Running 1000 pollers for 1 seconds with 0 microseconds period.
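This second run is identical except for -l 0: a zero period registers busy pollers rather than timed pollers, and the per-call cost in the banner below drops from 5564 to roughly 420 cycles (182 ns), which plausibly reflects the expiry bookkeeping a timed poller pays on every pass. The same arithmetic as before:

    awk 'BEGIN { printf "%d cyc, %d nsec\n", 2301669156 / 5471000, 2301669156 / 5471000 * 1e9 / 2300000000 }'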
00:06:12.556 ======================================
00:06:12.556 busy:2301669156 (cyc)
00:06:12.556 total_run_count: 5471000
00:06:12.556 tsc_hz: 2300000000 (cyc)
00:06:12.556 ======================================
00:06:12.556 poller_cost: 420 (cyc), 182 (nsec)
00:06:12.556
00:06:12.556 real 0m1.235s
00:06:12.556 user 0m1.152s
00:06:12.556 sys 0m0.080s
00:06:12.556 07:41:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.556 07:41:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:12.556 ************************************
00:06:12.556 END TEST thread_poller_perf
00:06:12.556 ************************************
00:06:12.556 07:41:56 thread -- common/autotest_common.sh@1142 -- # return 0
00:06:12.556 07:41:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:12.556
00:06:12.556 real 0m2.695s
00:06:12.556 user 0m2.388s
00:06:12.556 sys 0m0.317s
00:06:12.556 07:41:56 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.556 07:41:56 thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.556 ************************************
00:06:12.556 END TEST thread
00:06:12.556 ************************************
00:06:12.556 07:41:56 -- common/autotest_common.sh@1142 -- # return 0
00:06:12.556 07:41:56 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:12.556 07:41:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:12.556 07:41:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.556 07:41:56 -- common/autotest_common.sh@10 -- # set +x
00:06:12.556 ************************************
00:06:12.556 START TEST accel
00:06:12.556 ************************************
00:06:12.556 * Looking for test storage...
00:06:12.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:12.556 07:41:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:06:12.556 07:41:57 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:06:12.556 07:41:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:12.556 07:41:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3083898
00:06:12.556 07:41:57 accel -- accel/accel.sh@63 -- # waitforlisten 3083898
00:06:12.556 07:41:57 accel -- common/autotest_common.sh@829 -- # '[' -z 3083898 ']'
00:06:12.556 07:41:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.556 07:41:57 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:06:12.556 07:41:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:12.556 07:41:57 accel -- accel/accel.sh@61 -- # build_accel_config
00:06:12.556 07:41:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:12.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
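get_expected_opcs below asks the freshly started target which module services each accel opcode and records the answers: accel_get_opc_assignments returns a JSON object, jq flattens it into key=value lines, and the IFS== read loop files every opcode under its module. With no hardware engine configured on this node, everything resolves to "software". The same query can be run standalone (socket defaulting to /var/tmp/spdk.sock):

    # Mirror of the discovery loop that follows:
    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
      | while IFS== read -r opc module; do
            echo "opcode '$opc' -> module '$module'"
        done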
00:06:12.556 07:41:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.556 07:41:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.556 07:41:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.556 07:41:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.556 07:41:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.556 07:41:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.556 07:41:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.556 07:41:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:12.556 07:41:57 accel -- accel/accel.sh@41 -- # jq -r . 00:06:12.556 [2024-07-15 07:41:57.125113] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:12.556 [2024-07-15 07:41:57.125164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083898 ] 00:06:12.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.556 [2024-07-15 07:41:57.190985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.556 [2024-07-15 07:41:57.264508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@862 -- # return 0 00:06:13.493 07:41:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:13.493 07:41:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:13.493 07:41:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:13.493 07:41:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:13.493 07:41:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:13.493 07:41:57 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:13.493 07:41:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 
00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.493 07:41:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.493 07:41:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.493 07:41:57 accel -- accel/accel.sh@75 -- # killprocess 3083898 00:06:13.493 07:41:57 accel -- common/autotest_common.sh@948 -- # '[' -z 3083898 ']' 00:06:13.494 07:41:57 accel -- common/autotest_common.sh@952 -- # kill -0 3083898 00:06:13.494 07:41:57 accel -- common/autotest_common.sh@953 -- # uname 00:06:13.494 07:41:57 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.494 07:41:57 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3083898 00:06:13.494 07:41:58 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.494 07:41:58 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.494 07:41:58 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3083898' 00:06:13.494 killing process with pid 3083898 00:06:13.494 07:41:58 accel -- common/autotest_common.sh@967 -- # kill 3083898 00:06:13.494 07:41:58 accel -- common/autotest_common.sh@972 -- # wait 3083898 00:06:13.753 07:41:58 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:13.753 07:41:58 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.753 07:41:58 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@40 -- # 
local IFS=, 00:06:13.753 07:41:58 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:13.753 07:41:58 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.753 07:41:58 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.753 07:41:58 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.753 07:41:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.753 ************************************ 00:06:13.753 START TEST accel_missing_filename 00:06:13.753 ************************************ 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.753 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:13.753 07:41:58 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:13.753 [2024-07-15 07:41:58.485080] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
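The negative tests that follow all lean on the harness's NOT wrapper, which inverts a command's exit status so that an expected failure counts as a pass. Roughly (a simplification of the autotest_common.sh version, which also validates its argument):

    # Rough sketch of NOT: succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # the failure we wanted
    }
    NOT accel_perf -t 1 -w compress   # passes: compress without -l cannot run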
00:06:13.753 [2024-07-15 07:41:58.485132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084169 ] 00:06:14.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.014 [2024-07-15 07:41:58.554530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.014 [2024-07-15 07:41:58.630133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.014 [2024-07-15 07:41:58.670794] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.014 [2024-07-15 07:41:58.730304] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:14.273 A filename is required. 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.273 00:06:14.273 real 0m0.345s 00:06:14.273 user 0m0.253s 00:06:14.273 sys 0m0.130s 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.273 07:41:58 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:14.273 ************************************ 00:06:14.273 END TEST accel_missing_filename 00:06:14.273 ************************************ 00:06:14.273 07:41:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.273 07:41:58 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.273 07:41:58 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:14.273 07:41:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.273 07:41:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.273 ************************************ 00:06:14.273 START TEST accel_compress_verify 00:06:14.273 ************************************ 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.273 07:41:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:14.273 07:41:58 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:14.273 [2024-07-15 07:41:58.898871] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:14.273 [2024-07-15 07:41:58.898939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084202 ] 00:06:14.273 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.273 [2024-07-15 07:41:58.969031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.532 [2024-07-15 07:41:59.042292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.532 [2024-07-15 07:41:59.082881] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.532 [2024-07-15 07:41:59.142345] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:14.532 00:06:14.532 Compression does not support the verify option, aborting. 
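The es= bookkeeping that follows decodes the failure status: statuses above 128 conventionally mean death by signal (128 + signum), so the harness appears to subtract 128 before folding any remaining nonzero value down to es=1. That matches the traces here (234 -> 106 for the missing-filename case, 161 -> 33 here, both then reduced to 1). A simplified reconstruction:

    # Simplified view of the exit-status normalization seen in the es= trace:
    es=$?                      # e.g. 161 from the aborted accel_perf run
    if (( es > 128 )); then
        es=$(( es - 128 ))     # strip the signal bias: 161 -> 33
    fi
    (( es != 0 )) && es=1      # any surviving failure counts once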
00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.532 00:06:14.532 real 0m0.346s 00:06:14.532 user 0m0.250s 00:06:14.532 sys 0m0.135s 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.532 07:41:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:14.532 ************************************ 00:06:14.532 END TEST accel_compress_verify 00:06:14.532 ************************************ 00:06:14.532 07:41:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.532 07:41:59 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:14.532 07:41:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.532 07:41:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.532 07:41:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.532 ************************************ 00:06:14.532 START TEST accel_wrong_workload 00:06:14.532 ************************************ 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.532 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:14.532 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:14.532 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:14.791 07:41:59 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
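accel_wrong_workload hands accel_perf a workload name that is not in its table, so spdk_app_parse_args rejects the -w argument before any I/O is set up and the tool prints the usage text reproduced below. The invocation under test, plus a valid counterpart for contrast (binary path shortened):

    NOT accel_perf -t 1 -w foobar   # rejected at argument parsing
    accel_perf -t 1 -w copy         # 'copy' is on the usage text's -w list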
00:06:14.792 Unsupported workload type: foobar
[2024-07-15 07:41:59.306460] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:06:14.792 accel_perf options:
00:06:14.792 [-h help message]
00:06:14.792 [-q queue depth per core]
00:06:14.792 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:14.792 [-T number of threads per core
00:06:14.792 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:14.792 [-t time in seconds]
00:06:14.792 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:14.792 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:06:14.792 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:14.792 [-l for compress/decompress workloads, name of uncompressed input file
00:06:14.792 [-S for crc32c workload, use this seed value (default 0)
00:06:14.792 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:14.792 [-f for fill workload, use this BYTE value (default 255)
00:06:14.792 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:14.792 [-y verify result if this switch is on]
00:06:14.792 [-a tasks to allocate per core (default: same value as -q)]
00:06:14.792 Can be used to spread operations across a wider range of memory.
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:14.792
00:06:14.792 real 0m0.032s
00:06:14.792 user 0m0.020s
00:06:14.792 sys 0m0.012s
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:14.792 07:41:59 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:06:14.792 ************************************
00:06:14.792 END TEST accel_wrong_workload
00:06:14.792 ************************************
00:06:14.792 Error: writing output failed: Broken pipe
00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:14.792 07:41:59 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.792 07:41:59 accel -- common/autotest_common.sh@10 -- # set +x
00:06:14.792 ************************************
00:06:14.792 START TEST accel_negative_buffers
00:06:14.792 ************************************
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:06:14.792 07:41:59 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:06:14.792 -x option must be non-negative.
[2024-07-15 07:41:59.405477] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:06:14.792 accel_perf options:
00:06:14.792 [-h help message]
00:06:14.792 [-q queue depth per core]
00:06:14.792 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:14.792 [-T number of threads per core
00:06:14.792 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:14.792 [-t time in seconds]
00:06:14.792 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:14.792 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:06:14.792 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:14.792 [-l for compress/decompress workloads, name of uncompressed input file
00:06:14.792 [-S for crc32c workload, use this seed value (default 0)
00:06:14.792 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:14.792 [-f for fill workload, use this BYTE value (default 255)
00:06:14.792 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:14.792 [-y verify result if this switch is on]
00:06:14.792 [-a tasks to allocate per core (default: same value as -q)]
00:06:14.792 Can be used to spread operations across a wider range of memory.
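Per the usage text just printed, -x sets the number of xor source buffers with a minimum of 2, so the -x -1 under test fails the non-negative check during argument parsing. (The "Broken pipe" lines are most plausibly the perf tool's output stream closing early once the harness has seen the expected failure; that reading is an inference, not something the log states.) For contrast, binary path shortened:

    NOT accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative
    accel_perf -t 1 -w xor -y -x 2        # smallest legal source-buffer count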
00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.792 00:06:14.792 real 0m0.033s 00:06:14.792 user 0m0.023s 00:06:14.792 sys 0m0.009s 00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.792 07:41:59 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:14.792 ************************************ 00:06:14.792 END TEST accel_negative_buffers 00:06:14.792 ************************************ 00:06:14.792 Error: writing output failed: Broken pipe 00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.792 07:41:59 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:14.792 07:41:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.792 07:41:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.792 ************************************ 00:06:14.792 START TEST accel_crc32c 00:06:14.792 ************************************ 00:06:14.792 07:41:59 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:14.792 07:41:59 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.792 [2024-07-15 07:41:59.499536] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
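The val= stream that follows is accel.sh's accel_test helper walking its parameter list for this case (workload crc32c, the seed from -S 32, 4096 byte buffers, the software module, queue settings, a 1 second run, verification on) before handing them to accel_perf. The equivalent direct invocation, as named by run_test above (binary path shortened):

    accel_perf -t 1 -w crc32c -S 32 -y   # CRC-32C for 1 s with result verification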
00:06:14.792 [2024-07-15 07:41:59.499582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084270 ] 00:06:14.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.051 [2024-07-15 07:41:59.566743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.051 [2024-07-15 07:41:59.640384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.051 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:15.052 07:41:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:16.427 07:42:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.427
00:06:16.427 real	0m1.346s
00:06:16.427 user	0m1.237s
00:06:16.427 sys	0m0.121s
00:06:16.427 07:42:00 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.427 07:42:00 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:16.427 ************************************
00:06:16.427 END TEST accel_crc32c
00:06:16.427 ************************************
00:06:16.427 07:42:00 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:16.427 07:42:00 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:06:16.427 07:42:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:16.427 07:42:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:16.427 07:42:00 accel -- common/autotest_common.sh@10 -- # set +x
00:06:16.427 ************************************
00:06:16.427 START TEST accel_crc32c_C2
00:06:16.427 ************************************
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:06:16.427 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:16.428 07:42:00 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:16.428 [2024-07-15 07:42:00.913365] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:16.428 [2024-07-15 07:42:00.913431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084586 ]
00:06:16.428 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.428 [2024-07-15 07:42:00.981181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.428 [2024-07-15 07:42:01.054928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:16.428 07:42:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:17.806
00:06:17.806 real	0m1.351s
00:06:17.806 user	0m1.243s
00:06:17.806 sys	0m0.120s
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.806 07:42:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:17.806 ************************************
00:06:17.806 END TEST accel_crc32c_C2
00:06:17.806 ************************************
00:06:17.806 07:42:02 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:17.806 07:42:02 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:17.806 07:42:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:17.806 07:42:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:17.806 07:42:02 accel -- common/autotest_common.sh@10 -- # set +x
00:06:17.806 ************************************
00:06:17.806 START TEST accel_copy
00:06:17.806 ************************************
00:06:17.806 07:42:02 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:06:17.806 [2024-07-15 07:42:02.329843] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:17.806 [2024-07-15 07:42:02.329890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084921 ]
00:06:17.806 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.806 [2024-07-15 07:42:02.398328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.806 [2024-07-15 07:42:02.472438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.806 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:17.807 07:42:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:19.181 07:42:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:19.181
00:06:19.181 real	0m1.350s
00:06:19.181 user	0m1.245s
00:06:19.181 sys	0m0.118s
00:06:19.181 07:42:03 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:19.181 07:42:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:19.182 ************************************
00:06:19.182 END TEST accel_copy
00:06:19.182 ************************************
00:06:19.182 07:42:03 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:19.182 07:42:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:19.182 07:42:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:19.182 07:42:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:19.182 07:42:03 accel -- common/autotest_common.sh@10 -- # set +x
00:06:19.182 ************************************
00:06:19.182 START TEST accel_fill
00:06:19.182 ************************************
00:06:19.182 07:42:03 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:06:19.182 [2024-07-15 07:42:03.745647] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:19.182 [2024-07-15 07:42:03.745699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085189 ]
00:06:19.182 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.182 [2024-07-15 07:42:03.814684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.182 [2024-07-15 07:42:03.891201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.182 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:06:19.440 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:19.441 07:42:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:20.378 07:42:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.378
00:06:20.378 real	0m1.353s
00:06:20.378 user	0m1.238s
00:06:20.378 sys	0m0.128s
00:06:20.378 07:42:05 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.378 07:42:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:20.379 ************************************
00:06:20.379 END TEST accel_fill
00:06:20.379 ************************************
00:06:20.379 07:42:05 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:20.379 07:42:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:20.379 07:42:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:20.379 07:42:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.379 07:42:05 accel -- common/autotest_common.sh@10 -- # set +x
00:06:20.639 ************************************
00:06:20.639 START TEST accel_copy_crc32c
00:06:20.639 ************************************
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:06:20.639 [2024-07-15 07:42:05.167651] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:20.639 [2024-07-15 07:42:05.167719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085466 ]
00:06:20.639 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.639 [2024-07-15 07:42:05.235724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.639 [2024-07-15 07:42:05.310616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.639 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 07:42:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:22.018 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:22.018 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:22.018 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:22.018 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:22.019
00:06:22.019 real	0m1.352s
00:06:22.019 user	0m1.244s
00:06:22.019 sys	0m0.123s
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.019 07:42:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:22.019 ************************************
00:06:22.019 END TEST accel_copy_crc32c
00:06:22.019 ************************************
00:06:22.019 07:42:06 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:22.019 07:42:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:22.019 07:42:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:22.019 07:42:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.019 07:42:06 accel -- common/autotest_common.sh@10 -- # set +x
00:06:22.019 ************************************
00:06:22.019 START TEST accel_copy_crc32c_C2
00:06:22.019 ************************************
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:22.019 [2024-07-15 07:42:06.583905] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:22.019 [2024-07-15 07:42:06.583956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085737 ] 00:06:22.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.019 [2024-07-15 07:42:06.650898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.019 [2024-07-15 07:42:06.726917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.019 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.277 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.278 07:42:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:22.278 07:42:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:23.231
00:06:23.231 real 0m1.351s
00:06:23.231 user 0m1.237s
00:06:23.231 sys 0m0.128s
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:23.231 07:42:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:23.231 ************************************
00:06:23.231 END TEST accel_copy_crc32c_C2
00:06:23.231 ************************************
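A note on the trace pattern above: accel_test drives accel_perf and parses its key:value output with the `while IFS=: read -r var val` / `case "$var"` loop that produces the repeated @19/@20/@21 lines, and the three @27 tests at the end are the pass condition (a module and an opcode were reported, and the module was the software one). A minimal sketch of that pattern, with the per-field case arms elided because the exact field names are not visible in this trace:

  while IFS=: read -r var val; do
    case "$var" in
      # field handling elided; the @22/@23 lines in these traces show it
      # ends up setting accel_module=software and accel_opc=<workload>
      *) : ;;
    esac
  done < <(./build/examples/accel_perf -t 1 -w dualcast -y)
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]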
00:06:23.231 07:42:07 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:23.231 07:42:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:23.231 07:42:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:23.231 07:42:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:23.231 07:42:07 accel -- common/autotest_common.sh@10 -- # set +x
00:06:23.231 ************************************
00:06:23.231 START TEST accel_dualcast
00:06:23.231 ************************************
00:06:23.231 07:42:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:23.231 07:42:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:06:23.492 [2024-07-15 07:42:07.995250] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:23.492 [2024-07-15 07:42:07.995305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086067 ]
00:06:23.492 EAL: No free 2048 kB hugepages reported on node 1
00:06:23.492 [2024-07-15 07:42:08.064480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.492 [2024-07-15 07:42:08.143846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
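The EAL notice above ("No free 2048 kB hugepages reported on node 1") is emitted while accel_perf initializes DPDK; in this run it is informational, since spdk_app_start continues and the reactor comes up on core 0. If a run like this does fail on memory setup, the host's hugepage state can be checked through the standard kernel interfaces (not part of this job's scripts):

  grep -i huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages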
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:23.492 07:42:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.872 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.872 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.872 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.872 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:24.873 07:42:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.873
00:06:24.873 real 0m1.351s
00:06:24.873 user 0m1.238s
00:06:24.873 sys 0m0.125s
00:06:24.873 07:42:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.873 07:42:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:24.873 ************************************
00:06:24.873 END TEST accel_dualcast
00:06:24.873 ************************************
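accel_dualcast above exercises the dualcast opcode (one source buffer written to two destination buffers) on the software module; the ~1.35s real time is the 1-second measurement window requested by -t 1 plus app startup and teardown. Reproducing a single case by hand reduces to the accel_perf invocation traced at accel.sh@12, roughly (flag meanings inferred from the trace: -t seconds to run, -w workload, -y verify the result):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dualcast -y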
00:06:24.873 07:42:09 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:24.873 07:42:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:24.873 07:42:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:24.873 07:42:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.873 07:42:09 accel -- common/autotest_common.sh@10 -- # set +x
00:06:24.873 ************************************
00:06:24.873 START TEST accel_compare
00:06:24.873 ************************************
00:06:24.873 07:42:09 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:06:24.873 [2024-07-15 07:42:09.413441] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:24.873 [2024-07-15 07:42:09.413488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086626 ]
00:06:24.873 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.873 [2024-07-15 07:42:09.466768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.873 [2024-07-15 07:42:09.543701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:24.873 07:42:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:26.253 07:42:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.253
00:06:26.253 real 0m1.338s
00:06:26.253 user 0m1.238s
00:06:26.253 sys 0m0.112s
00:06:26.253 07:42:10 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.253 07:42:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:06:26.253 ************************************
00:06:26.253 END TEST accel_compare
00:06:26.253 ************************************
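accel_compare passes the same way: the software module reported the compare opcode and the timing stays in the same ~1.34s envelope. The two xor cases that follow differ only in how many source buffers accel_perf XORs together; the config dumps below show val=2 for the default run and val=3 once -x 3 is passed:

  ./build/examples/accel_perf -t 1 -w xor -y        # two source buffers (default)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # three source buffers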
00:06:26.253 07:42:10 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:26.253 07:42:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:26.253 07:42:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:26.253 07:42:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.253 07:42:10 accel -- common/autotest_common.sh@10 -- # set +x
00:06:26.253 ************************************
00:06:26.253 START TEST accel_xor
00:06:26.253 ************************************
00:06:26.253 07:42:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:26.254 [2024-07-15 07:42:10.818825] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:26.254 [2024-07-15 07:42:10.818872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086933 ]
00:06:26.254 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.254 [2024-07-15 07:42:10.885555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.254 [2024-07-15 07:42:10.959315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:26.253 07:42:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:27.449 07:42:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:27.449
00:06:27.449 real 0m1.349s
00:06:27.449 user 0m1.240s
00:06:27.449 sys 0m0.120s
00:06:27.449 07:42:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.449 07:42:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:27.449 ************************************
00:06:27.449 END TEST accel_xor
00:06:27.449 ************************************
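In the three-source variant that follows, the destination is the bytewise XOR of all sources (dst[i] = src1[i] ^ src2[i] ^ src3[i]) and the -y pass recomputes it for verification. The underlying arithmetic, shown in shell for illustration only (not SPDK code):

  printf '0x%02x\n' $(( 0xaa ^ 0x55 ^ 0x0f ))   # prints 0xf0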
00:06:27.450 07:42:12 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:27.450 07:42:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:27.450 07:42:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:27.450 07:42:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:27.450 07:42:12 accel -- common/autotest_common.sh@10 -- # set +x
00:06:27.450 ************************************
00:06:27.450 START TEST accel_xor
00:06:27.450 ************************************
00:06:27.450 07:42:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:27.450 07:42:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:27.708 [2024-07-15 07:42:12.234844] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:27.708 [2024-07-15 07:42:12.234895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087217 ]
00:06:27.708 EAL: No free 2048 kB hugepages reported on node 1
00:06:27.708 [2024-07-15 07:42:12.301895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.708 [2024-07-15 07:42:12.375334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:27.708 07:42:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:29.089 07:42:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:29.089
00:06:29.089 real 0m1.347s
00:06:29.089 user 0m1.240s
00:06:29.089 sys 0m0.120s
00:06:29.089 07:42:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:29.089 07:42:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:29.089 ************************************
00:06:29.089 END TEST accel_xor
00:06:29.089 ************************************
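The dif_verify case that follows moves from plain memory ops to T10 DIF processing: its config dump shows a 4096-byte transfer, a 512-byte block size, and 8 bytes of protection information per block, so each of the eight blocks carries a guard/app-tag/ref-tag tuple that the opcode checks. The geometry implied by those '4096 bytes'/'512 bytes'/'8 bytes' values:

  echo "$(( 4096 / 512 )) blocks"                      # 8 blocks
  echo "$(( 4096 / 512 * 8 )) bytes protection info"   # 64 bytes across the buffer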
00:06:29.089 07:42:13 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:29.089 07:42:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:29.089 07:42:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:29.089 07:42:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:29.089 07:42:13 accel -- common/autotest_common.sh@10 -- # set +x
00:06:29.089 ************************************
00:06:29.089 START TEST accel_dif_verify
00:06:29.089 ************************************
00:06:29.089 07:42:13 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:06:29.089 [2024-07-15 07:42:13.650640] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:29.089 [2024-07-15 07:42:13.650691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087486 ]
00:06:29.089 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.089 [2024-07-15 07:42:13.716608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.089 [2024-07-15 07:42:13.788679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:29.089 07:42:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:30.287 07:42:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:30.287
00:06:30.287 real 0m1.344s
00:06:30.287 user 0m1.238s
00:06:30.287 sys 0m0.120s
00:06:30.287 07:42:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:30.287 07:42:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:06:30.287 ************************************
00:06:30.287 END TEST accel_dif_verify
00:06:30.287 ************************************
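accel_dif_generate, which starts below, is the producer side of the case above: instead of checking existing protection tuples it computes and inserts one per 512-byte block, over the same 4096/512/8 geometry. Note that, per the trace, the two DIF workloads are invoked without -y:

  ./build/examples/accel_perf -t 1 -w dif_verify     # check existing DIF tuples
  ./build/examples/accel_perf -t 1 -w dif_generate   # compute and insert them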
00:06:30.287 07:42:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:30.287 07:42:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:30.287 07:42:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:30.287 07:42:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:30.287 07:42:15 accel -- common/autotest_common.sh@10 -- # set +x
00:06:30.287 ************************************
00:06:30.287 START TEST accel_dif_generate
00:06:30.287 ************************************
00:06:30.547 07:42:15 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:06:30.547 07:42:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:06:30.548 [2024-07-15 07:42:15.064174] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:30.548 [2024-07-15 07:42:15.064223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087741 ]
00:06:30.548 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.548 [2024-07-15 07:42:15.131372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.548 [2024-07-15 07:42:15.203341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:30.548 07:42:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:31.926 07:42:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:31.926 07:42:16 accel.accel_dif_generate --
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.927 00:06:31.927 real 0m1.347s 00:06:31.927 user 0m1.236s 00:06:31.927 sys 0m0.125s 00:06:31.927 07:42:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.927 07:42:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:31.927 ************************************ 00:06:31.927 END TEST accel_dif_generate 00:06:31.927 ************************************ 00:06:31.927 07:42:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.927 07:42:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:31.927 07:42:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:31.927 07:42:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.927 07:42:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.927 ************************************ 00:06:31.927 START TEST accel_dif_generate_copy 00:06:31.927 ************************************ 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:31.927 [2024-07-15 07:42:16.478716] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
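Stripped of the harness, each of these tests is a single one-second accel_perf pass; the dif_generate run that just posted 'real 0m1.347s' could be reproduced directly from the workspace checkout (dropping the optional config feed is an assumption):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate    # 1 s of software dif_generate, 4096-byte transfers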
00:06:31.927 [2024-07-15 07:42:16.478782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087995 ] 00:06:31.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.927 [2024-07-15 07:42:16.528787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.927 [2024-07-15 07:42:16.601174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
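The 'EAL: No free 2048 kB hugepages reported on node 1' notice repeats before every run here; it only means NUMA node 1 contributes no free 2 MB pages, and the per-node counters it refers to can be inspected directly:

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages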
val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.927 07:42:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.306 00:06:33.306 real 0m1.332s 00:06:33.306 user 0m1.233s 00:06:33.306 sys 0m0.112s 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.306 07:42:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.306 ************************************ 00:06:33.306 END TEST accel_dif_generate_copy 00:06:33.306 ************************************ 00:06:33.306 07:42:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.306 07:42:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:33.306 07:42:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.306 07:42:17 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:33.306 07:42:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.306 07:42:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.306 ************************************ 00:06:33.306 START TEST accel_comp 00:06:33.306 ************************************ 00:06:33.306 07:42:17 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.306 07:42:17 accel.accel_comp -- 
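accel_comp is the first run that needs real input data, so run_test adds '-l' to point accel_perf at the bundled bib file; mirroring the logged command from the checkout root:

  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib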
accel/accel.sh@16 -- # local accel_opc 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:33.306 07:42:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:33.306 [2024-07-15 07:42:17.873355] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:33.306 [2024-07-15 07:42:17.873412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088242 ] 00:06:33.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.306 [2024-07-15 07:42:17.941605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.306 [2024-07-15 07:42:18.014664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.306 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.306 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.306 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.306 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.565 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.566 07:42:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:34.501 07:42:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.501 00:06:34.501 real 0m1.348s 00:06:34.501 user 0m1.239s 00:06:34.501 sys 0m0.123s 00:06:34.501 07:42:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.501 07:42:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:34.501 ************************************ 00:06:34.501 END TEST accel_comp 00:06:34.501 ************************************ 00:06:34.501 07:42:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.501 07:42:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.501 07:42:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:34.501 07:42:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.501 07:42:19 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.760 ************************************ 00:06:34.760 START TEST accel_decomp 00:06:34.760 ************************************ 00:06:34.760 07:42:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:34.760 [2024-07-15 07:42:19.290138] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
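accel_decomp inverts the previous run and adds '-y', which the harness passes through so accel_perf checks the decompressed output against the original (note the config dump below switches to 'val=Yes' where the earlier tests showed 'val=No'):

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y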
00:06:34.760 [2024-07-15 07:42:19.290185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088493 ] 00:06:34.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.760 [2024-07-15 07:42:19.356765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.760 [2024-07-15 07:42:19.428958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.760 07:42:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.132 07:42:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.132 00:06:36.132 real 0m1.349s 00:06:36.132 user 0m1.241s 00:06:36.132 sys 0m0.122s 00:06:36.132 07:42:20 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.132 07:42:20 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:36.132 ************************************ 00:06:36.132 END TEST accel_decomp 00:06:36.132 ************************************ 00:06:36.132 07:42:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.132 07:42:20 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.132 07:42:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:36.132 07:42:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.132 07:42:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.132 ************************************ 00:06:36.132 START TEST accel_decomp_full 00:06:36.132 ************************************ 00:06:36.132 07:42:20 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.132 07:42:20 
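accel_decomp_full repeats the decompress run with '-o 0'; judging from the dump below, that turns the default 4096-byte transfers into one whole-file buffer, with a val='111250 bytes' record replacing the usual val='4096 bytes' (reading -o as the transfer-size knob is an assumption):

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0   # 0 = full file per operation, per the assumption above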
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:36.132 07:42:20 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:36.132 [2024-07-15 07:42:20.704905] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:36.132 [2024-07-15 07:42:20.704958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088740 ] 00:06:36.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.132 [2024-07-15 07:42:20.772906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.132 [2024-07-15 07:42:20.845771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.389 07:42:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.324 07:42:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.324 00:06:37.324 real 0m1.359s 00:06:37.324 user 0m1.243s 00:06:37.324 sys 0m0.129s 00:06:37.324 07:42:22 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.324 07:42:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:37.324 ************************************ 00:06:37.324 END TEST accel_decomp_full 00:06:37.324 ************************************ 00:06:37.324 07:42:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.324 07:42:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.324 07:42:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
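accel_decomp_mcore widens the same verified decompress to four cores: the harness's '-m 0xf' surfaces in the EAL parameters below as '-c 0xf', and one reactor comes up per set bit. The equivalent direct run, with flags as logged:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf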
00:06:37.324 07:42:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.324 07:42:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.584 ************************************ 00:06:37.584 START TEST accel_decomp_mcore 00:06:37.584 ************************************ 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:37.584 [2024-07-15 07:42:22.130099] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:06:37.584 [2024-07-15 07:42:22.130164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088992 ] 00:06:37.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.584 [2024-07-15 07:42:22.197718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.584 [2024-07-15 07:42:22.273195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.584 [2024-07-15 07:42:22.273315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.584 [2024-07-15 07:42:22.273347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.584 [2024-07-15 07:42:22.273348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.584 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 
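The four 'Reactor started on core N' notices above land in whatever order the threads win the race (1, 2, 0, 3 here), but they are exactly the set bits of 0xf; a one-liner to decode any such mask:

  mask=0xf; for ((i = 0; i < 32; i++)); do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done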
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.585 07:42:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.960 07:42:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.960 00:06:38.960 real 0m1.360s 00:06:38.960 user 0m4.568s 00:06:38.960 sys 0m0.133s 00:06:38.960 07:42:23 
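For readers who want to replay this run outside the harness, a minimal sketch follows. The binary path and flags are copied verbatim from the invocation above; the JSON config that build_accel_config normally feeds over fd 62 is replaced with an assumed empty object, which may not match what the real harness emits.

    # sketch: rerun the decompress workload on cores 0-3 (-m 0xf), as in the test above
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -m 0xf 62< <(echo '{}')   # '{}' is an assumed stand-in config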
00:06:38.960 07:42:23 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:38.960 07:42:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:38.960 07:42:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:38.960 07:42:23 accel -- common/autotest_common.sh@10 -- # set +x
00:06:38.960 ************************************
00:06:38.960 START TEST accel_decomp_full_mcore
00:06:38.960 ************************************
00:06:38.960 07:42:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:38.960 07:42:23 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:38.960 07:42:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:38.960 07:42:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
[repetitive xtrace elided: build_accel_config (accel.sh@31-41) again emits an empty accel_json_cfg]
00:06:38.960 [2024-07-15 07:42:23.555162] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:38.960 [2024-07-15 07:42:23.555209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089249 ]
00:06:38.960 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.960 [2024-07-15 07:42:23.621692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:38.960 [2024-07-15 07:42:23.696928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.960 [2024-07-15 07:42:23.697039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:38.960 [2024-07-15 07:42:23.697143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.960 [2024-07-15 07:42:23.697144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[repetitive xtrace elided: accel.sh@19-23 "val=" loop feeds the perf config -- 0xf, decompress, '111250 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes -- then reads the results back at 07:42:24]
00:06:40.156 07:42:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:40.156 07:42:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:40.156 07:42:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:40.156 real 0m1.372s
00:06:40.156 user 0m4.621s
00:06:40.156 sys 0m0.132s
00:06:40.156 07:42:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:40.156 07:42:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:40.156 ************************************
00:06:40.156 END TEST accel_decomp_full_mcore
00:06:40.156 ************************************
00:06:40.415 07:42:24 accel -- common/autotest_common.sh@1142 -- # return 0
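The full variant differs from the chunked run only by the extra -o 0 flag, and the buffer the harness reports grows from '4096 bytes' to '111250 bytes' (the whole bib test file) while wall time stays nearly flat. A hedged sketch for comparing the two side by side (same assumed stand-in config as before; the meaning of -o here is inferred from the harness usage, not verified against accel_perf's help text):

    # sketch: chunked vs whole-file decompress on four cores
    SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    for extra in "" "-o 0"; do   # $extra left unquoted below so "-o 0" splits into two args
        time $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
            -l $SPDK/test/accel/bib -y -m 0xf $extra 62< <(echo '{}')
    done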
00:06:40.415 07:42:24 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:40.415 07:42:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:06:40.415 07:42:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:40.415 07:42:24 accel -- common/autotest_common.sh@10 -- # set +x
00:06:40.415 ************************************
00:06:40.415 START TEST accel_decomp_mthread
00:06:40.415 ************************************
00:06:40.415 07:42:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:40.415 07:42:24 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:40.415 07:42:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:40.415 07:42:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
[repetitive xtrace elided: build_accel_config (accel.sh@31-41) again emits an empty accel_json_cfg]
00:06:40.415 [2024-07-15 07:42:24.997627] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:40.415 [2024-07-15 07:42:24.997694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089497 ]
00:06:40.415 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.415 [2024-07-15 07:42:25.068182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.675 [2024-07-15 07:42:25.139091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[repetitive xtrace elided: accel.sh@19-23 "val=" loop feeds the perf config -- 0x1, decompress, '4096 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes -- then reads the results back at 07:42:26]
00:06:41.614 07:42:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:41.614 07:42:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:41.614 07:42:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:41.614 real 0m1.357s
00:06:41.614 user 0m1.242s
00:06:41.614 sys 0m0.128s
00:06:41.614 07:42:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.614 07:42:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:41.614 ************************************
00:06:41.614 END TEST accel_decomp_mthread
00:06:41.614 ************************************
00:06:41.614 07:42:26 accel -- common/autotest_common.sh@1142 -- # return 0
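-T 2 is the flag the mthread variants add (per the invocation above); with the default core mask, EAL reports one core and a single reactor, and the user time drops to roughly the wall time. A minimal sketch, with the same assumed stand-in config:

    # sketch: single-core, two-thread (-T 2) decompress run
    SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -T 2 62< <(echo '{}')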
00:06:41.614 07:42:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:41.614 07:42:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:41.614 07:42:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.614 07:42:26 accel -- common/autotest_common.sh@10 -- # set +x
00:06:41.903 ************************************
00:06:41.903 START TEST accel_decomp_full_mthread
00:06:41.903 ************************************
00:06:41.903 07:42:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:41.903 07:42:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:41.903 07:42:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:41.903 07:42:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
[repetitive xtrace elided: build_accel_config (accel.sh@31-41) again emits an empty accel_json_cfg]
00:06:41.903 [2024-07-15 07:42:26.429179] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:41.903 [2024-07-15 07:42:26.429247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089753 ]
00:06:41.903 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.903 [2024-07-15 07:42:26.498689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.903 [2024-07-15 07:42:26.572090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[repetitive xtrace elided: accel.sh@19-23 "val=" loop feeds the perf config -- 0x1, decompress, '111250 bytes', software, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes -- then reads the results back at 07:42:27]
00:06:43.286 07:42:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:43.286 07:42:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:43.286 07:42:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:43.286 real 0m1.383s
00:06:43.286 user 0m1.271s
00:06:43.286 sys 0m0.125s
00:06:43.286 07:42:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.286 07:42:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:43.286 ************************************
00:06:43.286 END TEST accel_decomp_full_mthread
00:06:43.286 ************************************
00:06:43.286 07:42:27 accel -- common/autotest_common.sh@1142 -- # return 0
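Across the four decompress variants the wall time is essentially flat (1.36-1.38 s) while user time splits cleanly: about 4.6 s for the 0xf-mask multicore runs versus about 1.25 s for the single-core -T 2 runs, consistent with four reactors versus one doing the work. A sketch that reruns all four back to back (flag sets copied from the four invocations above, same assumed stand-in config):

    # sketch: time all four decompress variants in sequence
    SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    for flags in "-m 0xf" "-o 0 -m 0xf" "-T 2" "-o 0 -T 2"; do   # $flags unquoted below so each set splits into args
        echo "== accel_perf -w decompress $flags =="
        time $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
            -l $SPDK/test/accel/bib -y $flags 62< <(echo '{}')
    done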
00:06:43.286 07:42:27 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:43.286 07:42:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:43.286 07:42:27 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:43.286 07:42:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
[repetitive xtrace elided: build_accel_config (accel.sh@31-41) again emits an empty accel_json_cfg]
00:06:43.286 ************************************
00:06:43.286 START TEST accel_dif_functional_tests
00:06:43.286 ************************************
00:06:43.286 07:42:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:43.286 [2024-07-15 07:42:27.889212] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:43.286 [2024-07-15 07:42:27.889262] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090001 ]
00:06:43.286 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.286 [2024-07-15 07:42:27.956161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:43.286 [2024-07-15 07:42:28.029648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.286 [2024-07-15 07:42:28.029756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.286 [2024-07-15 07:42:28.029756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:43.546
00:06:43.546 CUnit - A unit testing framework for C - Version 2.1-3
00:06:43.546 http://cunit.sourceforge.net/
00:06:43.546
00:06:43.546 Suite: accel_dif
00:06:43.546 Test: verify: DIF generated, GUARD check ...passed
00:06:43.546 Test: verify: DIF generated, APPTAG check ...passed
00:06:43.546 Test: verify: DIF generated, REFTAG check ...passed
00:06:43.546 Test: verify: DIF not generated, GUARD check ...[2024-07-15 07:42:28.098261] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:43.546 passed
00:06:43.546 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 07:42:28.098308] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:43.546 passed
00:06:43.546 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 07:42:28.098326] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:43.546 passed
00:06:43.546 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:43.546 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 07:42:28.098368] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:43.546 passed
00:06:43.546 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:43.546 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:43.546 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:43.546 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 07:42:28.098466] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:43.546 passed
00:06:43.546 Test: verify copy: DIF generated, GUARD check ...passed
00:06:43.546 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:43.546 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:43.546 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 07:42:28.098568] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:43.546 passed
00:06:43.546 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 07:42:28.098590] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:43.546 passed
00:06:43.546 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 07:42:28.098610] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:43.546 passed
00:06:43.546 Test: generate copy: DIF generated, GUARD check ...passed
00:06:43.546 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:43.546 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:43.546 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:43.546 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:43.546 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:43.546 Test: generate copy: iovecs-len validate ...[2024-07-15 07:42:28.098770] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:43.546 passed
00:06:43.546 Test: generate copy: buffer alignment validate ...passed
00:06:43.546
00:06:43.546 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:43.546               suites      1      1    n/a      0        0
00:06:43.546                tests     26     26     26      0        0
00:06:43.546              asserts    115    115    115      0      n/a
00:06:43.546
00:06:43.546 Elapsed time =    0.002 seconds
00:06:43.546 real 0m0.422s
00:06:43.546 user 0m0.617s
00:06:43.546 sys 0m0.158s
00:06:43.546 07:42:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.546 07:42:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:06:43.546 ************************************
00:06:43.546 END TEST accel_dif_functional_tests
00:06:43.546 ************************************
00:06:43.546 07:42:28 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:43.805 real 0m31.312s
00:06:43.805 user 0m34.902s
00:06:43.805 sys 0m4.480s
00:06:43.805 07:42:28 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.805 07:42:28 accel -- common/autotest_common.sh@10 -- # set +x
00:06:43.805 ************************************
00:06:43.805 END TEST accel
00:06:43.805 ************************************
00:06:43.805 07:42:28 -- common/autotest_common.sh@1142 -- # return 0
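All 26 DIF cases pass; the *ERROR* lines above are expected output from the negative cases, where dif.c is asked to verify deliberately corrupted Guard (CRC), App Tag, and Ref Tag fields. To rerun just this suite outside the harness, a hedged sketch (invocation copied from above; the fd-62 config is again an assumed empty stand-in, which may not match the harness):

    # sketch: rerun the accel DIF functional tests on their own
    SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    $SPDK/test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')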
00:06:43.805 07:42:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:43.805 07:42:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:43.805 07:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:43.805 07:42:28 -- common/autotest_common.sh@10 -- # set +x
00:06:43.805 ************************************
00:06:43.805 START TEST accel_rpc
00:06:43.805 ************************************
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:43.805 * Looking for test storage...
00:06:43.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:43.805 07:42:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:43.805 07:42:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3090091
00:06:43.805 07:42:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3090091
00:06:43.805 07:42:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3090091 ']'
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:43.805 07:42:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.805 [2024-07-15 07:42:28.509772] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:06:43.805 [2024-07-15 07:42:28.509819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090091 ]
00:06:44.064 EAL: No free 2048 kB hugepages reported on node 1
00:06:44.064 [2024-07-15 07:42:28.576043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.064 [2024-07-15 07:42:28.655483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.633 07:42:29 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:44.633 07:42:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:44.633 07:42:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:44.633 07:42:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:44.633 07:42:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:44.633 07:42:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:44.633 07:42:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:44.633 07:42:29 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:44.633 07:42:29 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:44.633 07:42:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.633 ************************************
00:06:44.633 START TEST accel_assign_opcode
00:06:44.633 ************************************
00:06:44.633 07:42:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:06:44.633 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
[repetitive rpc_cmd xtrace wrappers elided: common/autotest_common.sh@559 xtrace_disable / @10 set +x / @587 [[ 0 == 0 ]] around each call]
00:06:44.633 [2024-07-15 07:42:29.337491] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:44.633 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:44.633 [2024-07-15 07:42:29.345506] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:44.633 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:44.892 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:44.892 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:44.892 07:42:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:44.892 software
00:06:44.892 real 0m0.244s
00:06:44.892 user 0m0.044s
00:06:44.892 sys 0m0.013s
00:06:44.892 07:42:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:44.892 07:42:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:06:44.892 ************************************
00:06:44.892 END TEST accel_assign_opcode
00:06:44.892 ************************************
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:44.892 07:42:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3090091
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3090091 ']'
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3090091
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:44.892 07:42:29 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090091
00:06:45.152 07:42:29 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:45.152 07:42:29 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:45.152 07:42:29 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090091'
00:06:45.152 killing process with pid 3090091
00:06:45.152 07:42:29 accel_rpc -- common/autotest_common.sh@967 -- # kill 3090091
00:06:45.152 07:42:29 accel_rpc -- common/autotest_common.sh@972 -- # wait 3090091
00:06:45.411 real 0m1.592s
00:06:45.411 user 0m1.654s
00:06:45.411 sys 0m0.430s
00:06:45.411 07:42:29 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.411 07:42:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:45.411 ************************************
00:06:45.411 END TEST accel_rpc
00:06:45.411 ************************************
00:06:45.411 07:42:29 -- common/autotest_common.sh@1142 -- # return 0
00:06:45.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.411 07:42:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.411 07:42:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3090570 00:06:45.411 07:42:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3090570 00:06:45.411 07:42:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3090570 ']' 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.411 07:42:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.671 [2024-07-15 07:42:30.175496] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:06:45.671 [2024-07-15 07:42:30.175548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090570 ] 00:06:45.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.671 [2024-07-15 07:42:30.242369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.671 [2024-07-15 07:42:30.314614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.239 07:42:30 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.239 07:42:30 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:46.239 07:42:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:46.498 { 00:06:46.498 "version": "SPDK v24.09-pre git sha1 897e912d5", 00:06:46.498 "fields": { 00:06:46.498 "major": 24, 00:06:46.498 "minor": 9, 00:06:46.498 "patch": 0, 00:06:46.498 "suffix": "-pre", 00:06:46.498 "commit": "897e912d5" 00:06:46.498 } 00:06:46.498 } 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.498 07:42:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.498 07:42:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.498 07:42:31 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.498 07:42:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.498 07:42:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:46.498 07:42:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.499 07:42:31 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.757 request: 00:06:46.757 { 00:06:46.757 "method": "env_dpdk_get_mem_stats", 00:06:46.757 "req_id": 1 00:06:46.757 } 00:06:46.757 Got JSON-RPC error response 00:06:46.757 response: 00:06:46.757 { 00:06:46.757 "code": -32601, 00:06:46.757 "message": "Method not found" 00:06:46.757 } 00:06:46.757 07:42:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:46.757 07:42:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.757 07:42:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.757 07:42:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.757 07:42:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3090570 00:06:46.757 07:42:31 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3090570 ']' 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3090570 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090570 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090570' 00:06:46.758 killing process with pid 3090570 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@967 -- # kill 3090570 00:06:46.758 07:42:31 app_cmdline -- common/autotest_common.sh@972 -- # wait 3090570 00:06:47.016 00:06:47.016 real 0m1.694s 00:06:47.016 user 0m2.021s 00:06:47.016 sys 0m0.442s 00:06:47.016 07:42:31 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:47.016 07:42:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.016 ************************************ 00:06:47.016 END TEST app_cmdline 00:06:47.016 ************************************ 00:06:47.016 07:42:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.016 07:42:31 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.016 07:42:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.016 07:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.016 07:42:31 -- common/autotest_common.sh@10 -- # set +x 00:06:47.276 ************************************ 00:06:47.276 START TEST version 00:06:47.276 ************************************ 00:06:47.276 07:42:31 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.276 * Looking for test storage... 00:06:47.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.276 07:42:31 version -- app/version.sh@17 -- # get_header_version major 00:06:47.276 07:42:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # cut -f2 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.276 07:42:31 version -- app/version.sh@17 -- # major=24 00:06:47.276 07:42:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.276 07:42:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # cut -f2 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.276 07:42:31 version -- app/version.sh@18 -- # minor=9 00:06:47.276 07:42:31 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.276 07:42:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # cut -f2 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.276 07:42:31 version -- app/version.sh@19 -- # patch=0 00:06:47.276 07:42:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.276 07:42:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # cut -f2 00:06:47.276 07:42:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.276 07:42:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.276 07:42:31 version -- app/version.sh@22 -- # version=24.9 00:06:47.276 07:42:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.276 07:42:31 version -- app/version.sh@28 -- # version=24.9rc0 00:06:47.276 07:42:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.276 07:42:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:47.276 07:42:31 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:47.276 07:42:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:47.276 00:06:47.276 real 0m0.159s 00:06:47.276 user 0m0.087s 00:06:47.276 sys 0m0.110s 00:06:47.276 07:42:31 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.276 07:42:31 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.276 ************************************ 00:06:47.276 END TEST version 00:06:47.276 ************************************ 00:06:47.276 07:42:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.276 07:42:31 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:47.276 07:42:31 -- spdk/autotest.sh@198 -- # uname -s 00:06:47.276 07:42:31 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:47.276 07:42:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:47.276 07:42:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:47.276 07:42:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:47.276 07:42:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:47.276 07:42:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:47.276 07:42:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.276 07:42:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.536 07:42:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:47.536 07:42:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:47.536 07:42:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:47.536 07:42:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:47.536 07:42:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:47.536 07:42:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:47.536 07:42:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.536 07:42:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.536 07:42:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.536 07:42:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.536 ************************************ 00:06:47.536 START TEST nvmf_tcp 00:06:47.536 ************************************ 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.536 * Looking for test storage... 00:06:47.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.536 07:42:32 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.536 07:42:32 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.536 07:42:32 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.536 07:42:32 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.536 07:42:32 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.536 07:42:32 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.536 07:42:32 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:47.536 07:42:32 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:47.536 07:42:32 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.536 07:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.536 ************************************ 00:06:47.536 START TEST nvmf_example 00:06:47.536 ************************************ 00:06:47.536 07:42:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.796 * Looking for test storage... 
00:06:47.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.796 07:42:32 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.797 07:42:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.369 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:54.370 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:54.370 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:54.370 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:54.370 Found net devices under 0000:86:00.1: cvl_0_1 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.370 07:42:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:54.370 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:06:54.370 00:06:54.370 --- 10.0.0.2 ping statistics --- 00:06:54.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.371 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:06:54.371 00:06:54.371 --- 10.0.0.1 ping statistics --- 00:06:54.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.371 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3093998 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3093998 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3093998 ']' 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.371 07:42:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.371 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.371 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:54.371 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:54.371 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.371 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:54.631 07:42:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:54.631 EAL: No free 2048 kB hugepages reported on node 1 
00:07:06.838 Initializing NVMe Controllers 00:07:06.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:06.838 Initialization complete. Launching workers. 00:07:06.838 ======================================================== 00:07:06.838 Latency(us) 00:07:06.838 Device Information : IOPS MiB/s Average min max 00:07:06.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18230.70 71.21 3511.84 512.87 15998.64 00:07:06.838 ======================================================== 00:07:06.838 Total : 18230.70 71.21 3511.84 512.87 15998.64 00:07:06.838 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:06.838 rmmod nvme_tcp 00:07:06.838 rmmod nvme_fabrics 00:07:06.838 rmmod nvme_keyring 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:06.838 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3093998 ']' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3093998 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3093998 ']' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3093998 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093998 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093998' 00:07:06.839 killing process with pid 3093998 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3093998 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3093998 00:07:06.839 nvmf threads initialize successfully 00:07:06.839 bdev subsystem init successfully 00:07:06.839 created a nvmf target service 00:07:06.839 create targets's poll groups done 00:07:06.839 all subsystems of target started 00:07:06.839 nvmf target is running 00:07:06.839 all subsystems of target stopped 00:07:06.839 destroy targets's poll groups done 00:07:06.839 destroyed the nvmf target service 00:07:06.839 bdev subsystem finish successfully 00:07:06.839 nvmf threads destroy successfully 00:07:06.839 07:42:49 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.839 07:42:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.099 00:07:07.099 real 0m19.594s 00:07:07.099 user 0m45.905s 00:07:07.099 sys 0m5.875s 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.099 07:42:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.099 ************************************ 00:07:07.099 END TEST nvmf_example 00:07:07.099 ************************************ 00:07:07.361 07:42:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:07.361 07:42:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:07.361 07:42:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.361 07:42:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.361 07:42:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.361 ************************************ 00:07:07.361 START TEST nvmf_filesystem 00:07:07.361 ************************************ 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:07.361 * Looking for test storage... 
00:07:07.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:07.361 07:42:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:07.362 07:42:51 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:07.362 07:42:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:07.362 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:07.363 #define SPDK_CONFIG_H 00:07:07.363 #define SPDK_CONFIG_APPS 1 00:07:07.363 #define SPDK_CONFIG_ARCH native 00:07:07.363 #undef SPDK_CONFIG_ASAN 00:07:07.363 #undef SPDK_CONFIG_AVAHI 00:07:07.363 #undef SPDK_CONFIG_CET 00:07:07.363 #define SPDK_CONFIG_COVERAGE 1 00:07:07.363 #define SPDK_CONFIG_CROSS_PREFIX 00:07:07.363 #undef SPDK_CONFIG_CRYPTO 00:07:07.363 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:07.363 #undef SPDK_CONFIG_CUSTOMOCF 00:07:07.363 #undef SPDK_CONFIG_DAOS 00:07:07.363 #define SPDK_CONFIG_DAOS_DIR 00:07:07.363 #define SPDK_CONFIG_DEBUG 1 00:07:07.363 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:07.363 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:07.363 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:07.363 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:07.363 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:07.363 #undef SPDK_CONFIG_DPDK_UADK 00:07:07.363 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:07.363 #define SPDK_CONFIG_EXAMPLES 1 00:07:07.363 #undef SPDK_CONFIG_FC 00:07:07.363 #define SPDK_CONFIG_FC_PATH 00:07:07.363 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:07.363 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:07.363 #undef SPDK_CONFIG_FUSE 00:07:07.363 #undef SPDK_CONFIG_FUZZER 00:07:07.363 #define SPDK_CONFIG_FUZZER_LIB 00:07:07.363 #undef SPDK_CONFIG_GOLANG 00:07:07.363 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:07.363 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:07.363 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:07.363 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:07.363 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:07.363 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:07.363 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:07.363 #define SPDK_CONFIG_IDXD 1 00:07:07.363 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:07.363 #undef SPDK_CONFIG_IPSEC_MB 00:07:07.363 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:07.363 #define SPDK_CONFIG_ISAL 1 00:07:07.363 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:07.363 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:07.363 #define SPDK_CONFIG_LIBDIR 00:07:07.363 #undef SPDK_CONFIG_LTO 00:07:07.363 #define SPDK_CONFIG_MAX_LCORES 128 00:07:07.363 #define SPDK_CONFIG_NVME_CUSE 1 00:07:07.363 #undef SPDK_CONFIG_OCF 00:07:07.363 #define SPDK_CONFIG_OCF_PATH 00:07:07.363 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:07.363 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:07.363 #define SPDK_CONFIG_PGO_DIR 00:07:07.363 #undef SPDK_CONFIG_PGO_USE 00:07:07.363 #define SPDK_CONFIG_PREFIX /usr/local 00:07:07.363 #undef SPDK_CONFIG_RAID5F 00:07:07.363 #undef SPDK_CONFIG_RBD 00:07:07.363 #define SPDK_CONFIG_RDMA 1 00:07:07.363 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:07.363 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:07.363 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:07.363 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:07.363 #define SPDK_CONFIG_SHARED 1 00:07:07.363 #undef SPDK_CONFIG_SMA 00:07:07.363 #define SPDK_CONFIG_TESTS 1 00:07:07.363 #undef SPDK_CONFIG_TSAN 00:07:07.363 #define SPDK_CONFIG_UBLK 1 00:07:07.363 #define SPDK_CONFIG_UBSAN 1 00:07:07.363 #undef SPDK_CONFIG_UNIT_TESTS 00:07:07.363 #undef SPDK_CONFIG_URING 00:07:07.363 #define SPDK_CONFIG_URING_PATH 00:07:07.363 #undef SPDK_CONFIG_URING_ZNS 00:07:07.363 #undef SPDK_CONFIG_USDT 00:07:07.363 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:07.363 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:07.363 #define SPDK_CONFIG_VFIO_USER 1 00:07:07.363 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:07.363 #define SPDK_CONFIG_VHOST 1 00:07:07.363 #define SPDK_CONFIG_VIRTIO 1 00:07:07.363 #undef SPDK_CONFIG_VTUNE 00:07:07.363 #define SPDK_CONFIG_VTUNE_DIR 00:07:07.363 #define SPDK_CONFIG_WERROR 1 00:07:07.363 #define SPDK_CONFIG_WPDK_DIR 00:07:07.363 #undef SPDK_CONFIG_XNVME 00:07:07.363 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.363 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:07.364 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:07.365 07:42:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.365 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
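Each "-- # : 0" / "-- # export SPDK_TEST_..." pair replayed above is bash's default-then-export idiom: ':' is the null command, and its argument forces a ${VAR:=default} expansion that assigns only when the variable is unset, after which export publishes it to child processes. Under xtrace the expansion has already happened, which is why only the resulting value (": 0", ": 1", ": tcp") appears in the log. A minimal sketch of the same pattern, with the default values here chosen for illustration:

    #!/usr/bin/env bash
    # Sketch: default-then-export, as traced from autotest_common.sh above.
    # The ':' command does nothing; it exists only to host the
    # ${VAR:=default} expansion, which assigns when VAR is unset or empty.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF

    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

    # Callers can pre-seed flags and the defaults never clobber them,
    # which is how autorun-spdk.conf's SPDK_TEST_NVMF=1 survives:
    #   SPDK_TEST_NVMF=1 ./run_tests.sh
    echo "NVMF tests: $SPDK_TEST_NVMF over $SPDK_TEST_NVMF_TRANSPORT"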
00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3096408 ]] 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3096408 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.VNHvnj 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VNHvnj/tests/target /tmp/spdk.VNHvnj 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:07.366 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189573857280 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6400442368 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97977229312 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986629632 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:07.627 07:42:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=520192 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:07.627 * Looking for test storage... 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189573857280 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8615034880 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:07.627 07:42:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.627 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
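The "time file@line -- #" prefix on every traced command in this log is produced by the PS4 assignment visible just above (autotest_common.sh@1686). PS4 is expanded like PS1 before each command printed under set -x, so \t becomes the current time and ${BASH_SOURCE}/${LINENO} name the command's origin. A standalone sketch of the same technique; the script body is illustrative:

    #!/usr/bin/env bash
    # Sketch: self-locating xtrace via PS4, mirroring the trace prefix above.
    # \t  -> current time (PS4 undergoes prompt expansion like PS1)
    # ${BASH_SOURCE#${BASH_SOURCE%/*/*}/} -> source path trimmed to its last
    #     two components (e.g. common/build_config.sh)
    # \$  -> '$', or '#' when running as root, as in this log
    # (bash repeats PS4's first character to show nesting depth,
    #  hence the leading space)
    PS4=' \t ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
    echo hello   # traces roughly as:  07:42:52 dir/script.sh@11 -- $ echo hello
    set +x

The single quotes around the PS4 value matter: they defer the parameter expansions so they are re-evaluated at every traced command rather than once at assignment time.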
00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.628 07:42:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.628 07:42:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:14.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:14.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.200 07:42:57 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:14.200 Found net devices under 0000:86:00.0: cvl_0_0 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:14.200 Found net devices under 0000:86:00.1: cvl_0_1 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.200 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:14.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:14.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:07:14.201
00:07:14.201 --- 10.0.0.2 ping statistics ---
00:07:14.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:14.201 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:07:14.201 07:42:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:14.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:14.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:07:14.201
00:07:14.201 --- 10.0.0.1 ping statistics ---
00:07:14.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:14.201 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:14.201 ************************************
00:07:14.201 START TEST nvmf_filesystem_no_in_capsule
00:07:14.201 ************************************
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- #
xtrace_disable 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3099594 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3099594 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3099594 ']' 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.201 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.201 [2024-07-15 07:42:58.128612] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:07:14.201 [2024-07-15 07:42:58.128653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.201 [2024-07-15 07:42:58.193295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.201 [2024-07-15 07:42:58.299214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.201 [2024-07-15 07:42:58.299265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.201 [2024-07-15 07:42:58.299277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.201 [2024-07-15 07:42:58.299285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.201 [2024-07-15 07:42:58.299292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
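For anyone reproducing this environment by hand, the namespace plumbing traced above condenses to the following shell sequence; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, and port 4420 are the values observed in this particular run and will differ on other hosts:

  # The target-side port is isolated in its own network namespace;
  # the initiator-side port stays in the host namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic arriving on the initiator port, then check reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1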
00:07:14.201 [2024-07-15 07:42:58.299353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.201 [2024-07-15 07:42:58.299457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.201 [2024-07-15 07:42:58.299570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.201 [2024-07-15 07:42:58.299571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.460 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.460 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:14.460 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.460 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:14.460 07:42:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 [2024-07-15 07:42:59.040419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 Malloc1 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x
00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:14.460 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:14.460 [2024-07-15 07:42:59.184084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:14.461 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:07:14.461 {
00:07:14.461 "name": "Malloc1",
00:07:14.461 "aliases": [
00:07:14.461 "5ae58c8f-1766-4a66-b9bf-4aa1b8d930d7"
00:07:14.461 ],
00:07:14.461 "product_name": "Malloc disk",
00:07:14.461 "block_size": 512,
00:07:14.461 "num_blocks": 1048576,
00:07:14.461 "uuid": "5ae58c8f-1766-4a66-b9bf-4aa1b8d930d7",
00:07:14.461 "assigned_rate_limits": {
00:07:14.461 "rw_ios_per_sec": 0,
00:07:14.461 "rw_mbytes_per_sec": 0,
00:07:14.461 "r_mbytes_per_sec": 0,
00:07:14.461 "w_mbytes_per_sec": 0
00:07:14.461 },
00:07:14.461 "claimed": true,
00:07:14.461 "claim_type": "exclusive_write",
00:07:14.461 "zoned": false,
00:07:14.461 "supported_io_types": {
00:07:14.461 "read": true,
00:07:14.461 "write": true,
00:07:14.461 "unmap": true,
00:07:14.461 "flush": true,
00:07:14.461 "reset": true,
00:07:14.461 "nvme_admin": false,
00:07:14.461 "nvme_io": false,
00:07:14.461 "nvme_io_md": false,
00:07:14.461 "write_zeroes": true,
00:07:14.461 "zcopy": true,
00:07:14.461 "get_zone_info": false,
00:07:14.461 "zone_management": false,
00:07:14.461 "zone_append": false,
00:07:14.461 "compare": false,
00:07:14.461 "compare_and_write": false,
00:07:14.461 "abort": true,
00:07:14.461 "seek_hole": false,
00:07:14.461 "seek_data": false,
00:07:14.461 "copy": true,
00:07:14.461 "nvme_iov_md": false
00:07:14.461 },
00:07:14.461 "memory_domains": [
00:07:14.461 {
00:07:14.461 "dma_device_id": "system",
00:07:14.461 "dma_device_type": 1
00:07:14.461 },
00:07:14.461 {
00:07:14.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:14.461 "dma_device_type": 2
00:07:14.461 }
00:07:14.461 ],
00:07:14.461 "driver_specific": {}
00:07:14.461 }
00:07:14.461 ]'
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:07:14.719 07:42:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:07:16.134 07:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:07:16.134 07:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:07:16.134 07:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:07:16.134 07:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:07:16.134 07:43:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- #
sec_size_to_bytes nvme0n1 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.039 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.298 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:18.298 07:43:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.234 ************************************ 00:07:19.234 START TEST filesystem_ext4 00:07:19.234 ************************************ 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:19.234 07:43:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:19.234 07:43:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:19.234 mke2fs 1.46.5 (30-Dec-2021) 00:07:19.493 Discarding device blocks: 0/522240 done 00:07:19.493 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:19.493 Filesystem UUID: 86a27b1c-d380-4b94-84fa-ba183f72e4fc 00:07:19.493 Superblock backups stored on blocks: 00:07:19.493 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:19.493 00:07:19.493 Allocating group tables: 0/64 done 00:07:19.493 Writing inode tables: 0/64 done 00:07:19.493 Creating journal (8192 blocks): done 00:07:19.493 Writing superblocks and filesystem accounting information: 0/64 done 00:07:19.493 00:07:19.493 07:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:19.493 07:43:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3099594 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.430 00:07:20.430 real 0m1.231s 00:07:20.430 user 0m0.017s 00:07:20.430 sys 0m0.073s 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.430 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:20.430 ************************************ 00:07:20.430 END TEST filesystem_ext4 00:07:20.430 ************************************ 00:07:20.689 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:20.689 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.689 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.690 07:43:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:20.690 ************************************
00:07:20.690 START TEST filesystem_btrfs
00:07:20.690 ************************************
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:07:20.690 btrfs-progs v6.6.2
00:07:20.690 See https://btrfs.readthedocs.io for more information.
00:07:20.690
00:07:20.690 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:07:20.690 NOTE: several default settings have changed in version 5.15, please make sure
00:07:20.690 this does not affect your deployments:
00:07:20.690 - DUP for metadata (-m dup)
00:07:20.690 - enabled no-holes (-O no-holes)
00:07:20.690 - enabled free-space-tree (-R free-space-tree)
00:07:20.690
00:07:20.690 Label: (null)
00:07:20.690 UUID: 1004967c-a247-4615-b9b1-903776f46ac9
00:07:20.690 Node size: 16384
00:07:20.690 Sector size: 4096
00:07:20.690 Filesystem size: 510.00MiB
00:07:20.690 Block group profiles:
00:07:20.690 Data: single 8.00MiB
00:07:20.690 Metadata: DUP 32.00MiB
00:07:20.690 System: DUP 8.00MiB
00:07:20.690 SSD detected: yes
00:07:20.690 Zoned device: no
00:07:20.690 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:07:20.690 Runtime features: free-space-tree
00:07:20.690 Checksum: crc32c
00:07:20.690 Number of devices: 1
00:07:20.690 Devices:
00:07:20.690 ID SIZE PATH
00:07:20.690 1 510.00MiB /dev/nvme0n1p1
00:07:20.690
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0
00:07:20.690 07:43:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:07:21.626 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3099594
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:21.885
00:07:21.885 real 0m1.177s
00:07:21.885 user 0m0.028s
00:07:21.885 sys 0m0.124s
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:07:21.885 ************************************
00:07:21.885 END TEST filesystem_btrfs
00:07:21.885 ************************************
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:21.885 ************************************
00:07:21.885 START TEST filesystem_xfs
00:07:21.885 ************************************
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:07:21.885 07:43:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:21.886 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:07:21.886 = sectsz=512 attr=2, projid32bit=1
00:07:21.886 = crc=1 finobt=1, sparse=1, rmapbt=0
00:07:21.886 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:07:21.886 data = bsize=4096 blocks=130560, imaxpct=25
00:07:21.886 = sunit=0 swidth=0 blks
00:07:21.886 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:07:21.886 log =internal log bsize=4096 blocks=16384, version=2
00:07:21.886 = sectsz=512 sunit=0 blks, lazy-count=1
00:07:21.886 realtime =none extsz=4096 blocks=0, rtextents=0
00:07:22.820 Discarding blocks...Done.
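Each filesystem case in this suite (ext4 and btrfs above, xfs just below) applies the same mount-level check once mkfs completes; condensed from the target/filesystem.sh trace, it is roughly the following, where the literal PID (3099594 in this run) is the nvmf_tgt process captured at startup:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa              # one small write over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 3099594                    # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible too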
00:07:22.820 07:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:22.820 07:43:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3099594 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.352 00:07:25.352 real 0m3.513s 00:07:25.352 user 0m0.028s 00:07:25.352 sys 0m0.067s 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.352 07:43:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:25.352 ************************************ 00:07:25.352 END TEST filesystem_xfs 00:07:25.352 ************************************ 00:07:25.352 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:25.352 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:25.352 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:25.352 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.611 07:43:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3099594 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3099594 ']' 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3099594 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3099594 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3099594' 00:07:25.611 killing process with pid 3099594 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3099594 00:07:25.611 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3099594 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:26.179 00:07:26.179 real 0m12.570s 00:07:26.179 user 0m49.302s 00:07:26.179 sys 0m1.258s 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.179 ************************************ 00:07:26.179 END TEST nvmf_filesystem_no_in_capsule 00:07:26.179 ************************************ 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.179 ************************************ 00:07:26.179 START TEST nvmf_filesystem_in_capsule 00:07:26.179 ************************************ 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3101947 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3101947 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3101947 ']' 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.179 07:43:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.179 [2024-07-15 07:43:10.768328] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:07:26.179 [2024-07-15 07:43:10.768369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.179 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.179 [2024-07-15 07:43:10.840494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.179 [2024-07-15 07:43:10.920270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.179 [2024-07-15 07:43:10.920307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:26.179 [2024-07-15 07:43:10.920314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.179 [2024-07-15 07:43:10.920320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.179 [2024-07-15 07:43:10.920325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.179 [2024-07-15 07:43:10.920366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.179 [2024-07-15 07:43:10.920478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.179 [2024-07-15 07:43:10.920582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.179 [2024-07-15 07:43:10.920584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 [2024-07-15 07:43:11.624175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 [2024-07-15 07:43:11.766280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:27.118 { 00:07:27.118 "name": "Malloc1", 00:07:27.118 "aliases": [ 00:07:27.118 "60bd8fa4-a376-4aa9-b245-7d0c65d8b03f" 00:07:27.118 ], 00:07:27.118 "product_name": "Malloc disk", 00:07:27.118 "block_size": 512, 00:07:27.118 "num_blocks": 1048576, 00:07:27.118 "uuid": "60bd8fa4-a376-4aa9-b245-7d0c65d8b03f", 00:07:27.118 "assigned_rate_limits": { 00:07:27.118 "rw_ios_per_sec": 0, 00:07:27.118 "rw_mbytes_per_sec": 0, 00:07:27.118 "r_mbytes_per_sec": 0, 00:07:27.118 "w_mbytes_per_sec": 0 00:07:27.118 }, 00:07:27.118 "claimed": true, 00:07:27.118 "claim_type": "exclusive_write", 00:07:27.118 "zoned": false, 00:07:27.118 "supported_io_types": { 00:07:27.118 "read": true, 00:07:27.118 "write": true, 00:07:27.118 "unmap": true, 00:07:27.118 "flush": true, 00:07:27.118 "reset": true, 00:07:27.118 "nvme_admin": false, 00:07:27.118 "nvme_io": false, 00:07:27.118 "nvme_io_md": false, 00:07:27.118 "write_zeroes": true, 00:07:27.118 "zcopy": true, 00:07:27.118 "get_zone_info": false, 00:07:27.118 "zone_management": false, 00:07:27.118 
"zone_append": false, 00:07:27.118 "compare": false, 00:07:27.118 "compare_and_write": false, 00:07:27.118 "abort": true, 00:07:27.118 "seek_hole": false, 00:07:27.118 "seek_data": false, 00:07:27.118 "copy": true, 00:07:27.118 "nvme_iov_md": false 00:07:27.118 }, 00:07:27.118 "memory_domains": [ 00:07:27.118 { 00:07:27.118 "dma_device_id": "system", 00:07:27.118 "dma_device_type": 1 00:07:27.118 }, 00:07:27.118 { 00:07:27.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.118 "dma_device_type": 2 00:07:27.118 } 00:07:27.118 ], 00:07:27.118 "driver_specific": {} 00:07:27.118 } 00:07:27.118 ]' 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:27.118 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:27.377 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:27.377 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:27.377 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:27.377 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:27.377 07:43:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:28.313 07:43:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:28.313 07:43:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:28.313 07:43:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:28.313 07:43:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:28.313 07:43:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:30.844 07:43:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:32.219 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:32.219 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:32.219 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.219 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.219 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.219 ************************************ 00:07:32.220 START TEST filesystem_in_capsule_ext4 00:07:32.220 ************************************ 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:32.220 07:43:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:32.220 07:43:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:32.220 mke2fs 1.46.5 (30-Dec-2021) 00:07:32.220 Discarding device blocks: 0/522240 done 00:07:32.220 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:32.220 Filesystem UUID: ee600e83-9303-44b9-8a25-c7c247f5fb93 00:07:32.220 Superblock backups stored on blocks: 00:07:32.220 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:32.220 00:07:32.220 Allocating group tables: 0/64 done 00:07:32.220 Writing inode tables: 0/64 done 00:07:32.220 Creating journal (8192 blocks): done 00:07:33.045 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:07:33.045 00:07:33.045 07:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:33.045 07:43:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3101947 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.612 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.871 00:07:33.871 real 0m1.805s 00:07:33.871 user 0m0.020s 00:07:33.871 sys 0m0.070s 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:33.871 ************************************ 00:07:33.871 END TEST filesystem_in_capsule_ext4 00:07:33.871 ************************************ 
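Each filesystem pass in this test follows the same shape: make_filesystem picks the force flag mkfs needs (the @929/@932 branches above select -F for ext4 and -f otherwise), then the partition is mounted and exercised with a create/sync/delete round trip before being unmounted. A condensed sketch of that pattern, reusing the device and mount point from the log; treat it as illustrative, not the verbatim filesystem.sh:

  # smoke-test one filesystem type on the exported namespace (sketch)
  fstype=$1                         # ext4 | btrfs | xfs
  dev=/dev/nvme0n1p1
  force=-f
  [ "$fstype" = ext4 ] && force=-F  # mkfs.ext4 spells its force flag differently
  mkfs."$fstype" "$force" "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync     # write a file and flush it
  rm /mnt/device/aaa && sync        # remove it and flush again
  umount /mnt/device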
00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.871 ************************************ 00:07:33.871 START TEST filesystem_in_capsule_btrfs 00:07:33.871 ************************************ 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:33.871 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:34.145 btrfs-progs v6.6.2 00:07:34.145 See https://btrfs.readthedocs.io for more information. 00:07:34.145 00:07:34.145 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:34.145 NOTE: several default settings have changed in version 5.15, please make sure 00:07:34.145 this does not affect your deployments: 00:07:34.145 - DUP for metadata (-m dup) 00:07:34.145 - enabled no-holes (-O no-holes) 00:07:34.145 - enabled free-space-tree (-R free-space-tree) 00:07:34.145 00:07:34.145 Label: (null) 00:07:34.145 UUID: 410285ad-e251-4e3f-847d-cab4632ba46d 00:07:34.145 Node size: 16384 00:07:34.145 Sector size: 4096 00:07:34.145 Filesystem size: 510.00MiB 00:07:34.145 Block group profiles: 00:07:34.145 Data: single 8.00MiB 00:07:34.145 Metadata: DUP 32.00MiB 00:07:34.145 System: DUP 8.00MiB 00:07:34.145 SSD detected: yes 00:07:34.145 Zoned device: no 00:07:34.145 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:34.145 Runtime features: free-space-tree 00:07:34.145 Checksum: crc32c 00:07:34.145 Number of devices: 1 00:07:34.145 Devices: 00:07:34.145 ID SIZE PATH 00:07:34.145 1 510.00MiB /dev/nvme0n1p1 00:07:34.145 00:07:34.145 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:34.145 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.145 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.145 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:34.404 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.404 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3101947 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.405 00:07:34.405 real 0m0.499s 00:07:34.405 user 0m0.025s 00:07:34.405 sys 0m0.124s 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 ************************************ 00:07:34.405 END TEST filesystem_in_capsule_btrfs 00:07:34.405 ************************************ 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.405 07:43:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.405 ************************************ 00:07:34.405 START TEST filesystem_in_capsule_xfs 00:07:34.405 ************************************ 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:34.405 07:43:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:34.405 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:34.405 = sectsz=512 attr=2, projid32bit=1 00:07:34.405 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:34.405 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:34.405 data = bsize=4096 blocks=130560, imaxpct=25 00:07:34.405 = sunit=0 swidth=0 blks 00:07:34.405 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:34.405 log =internal log bsize=4096 blocks=16384, version=2 00:07:34.405 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:34.405 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:35.339 Discarding blocks...Done. 
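The geometry mkfs.xfs prints is consistent with the 510 MiB GPT partition carved from the 512 MiB namespace earlier: the data section reports bsize=4096 blocks=130560, and 4096 x 130560 = 534773760 B = 510 MiB, the same size btrfs reported as "Filesystem size: 510.00MiB" and ext4 formatted as 522240 1 KiB blocks (522240 x 1024 = 534773760 B).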
00:07:35.339 07:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:35.339 07:43:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3101947 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.283 00:07:37.283 real 0m2.923s 00:07:37.283 user 0m0.025s 00:07:37.283 sys 0m0.070s 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:37.283 ************************************ 00:07:37.283 END TEST filesystem_in_capsule_xfs 00:07:37.283 ************************************ 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.283 07:43:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:37.541 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:37.542 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:37.801 07:43:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3101947 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3101947 ']' 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3101947 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3101947 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.801 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.802 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3101947' 00:07:37.802 killing process with pid 3101947 00:07:37.802 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3101947 00:07:37.802 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3101947 00:07:38.061 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:38.061 00:07:38.061 real 0m12.068s 00:07:38.061 user 0m47.365s 00:07:38.061 sys 0m1.203s 00:07:38.061 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.061 07:43:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.061 ************************************ 00:07:38.061 END TEST nvmf_filesystem_in_capsule 00:07:38.061 ************************************ 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:38.320 rmmod nvme_tcp 00:07:38.320 rmmod nvme_fabrics 00:07:38.320 rmmod nvme_keyring 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.320 07:43:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.226 07:43:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.226 00:07:40.226 real 0m33.060s 00:07:40.226 user 1m38.498s 00:07:40.226 sys 0m7.048s 00:07:40.226 07:43:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.226 07:43:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.226 ************************************ 00:07:40.226 END TEST nvmf_filesystem 00:07:40.226 ************************************ 00:07:40.486 07:43:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:40.486 07:43:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:40.486 07:43:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.486 07:43:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.486 07:43:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.486 ************************************ 00:07:40.486 START TEST nvmf_target_discovery 00:07:40.486 ************************************ 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:40.486 * Looking for test storage... 
00:07:40.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.486 07:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.487 07:43:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.061 07:43:30 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:47.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:47.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:47.061 Found net devices under 0000:86:00.0: cvl_0_0 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:47.061 Found net devices under 0000:86:00.1: cvl_0_1 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.061 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:47.062 00:07:47.062 --- 10.0.0.2 ping statistics --- 00:07:47.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.062 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:47.062 00:07:47.062 --- 10.0.0.1 ping statistics --- 00:07:47.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.062 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3107533 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3107533 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3107533 ']' 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:47.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.062 07:43:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.062 [2024-07-15 07:43:31.023957] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:07:47.062 [2024-07-15 07:43:31.023998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.062 [2024-07-15 07:43:31.093210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.062 [2024-07-15 07:43:31.168910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.062 [2024-07-15 07:43:31.168951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.062 [2024-07-15 07:43:31.168958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.062 [2024-07-15 07:43:31.168965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.062 [2024-07-15 07:43:31.168970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.062 [2024-07-15 07:43:31.169054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.062 [2024-07-15 07:43:31.169181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.062 [2024-07-15 07:43:31.169287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.062 [2024-07-15 07:43:31.169287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 [2024-07-15 07:43:31.868142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
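rpc_cmd in these traces is the test harness's wrapper around SPDK's JSON-RPC client, so the loop that builds the four discovery targets expands to plain scripts/rpc.py calls. A sketch of that expansion, with the RPC names and arguments copied from the trace and only the rpc.py path assumed:

  rpc=./scripts/rpc.py                              # assumption: run from an SPDK checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192      # target/discovery.sh@23
  for i in 1 2 3 4; do
      # NULL_BDEV_SIZE=102400 / NULL_BLOCK_SIZE=512 come from discovery.sh@11-12
      $rpc bdev_null_create Null$i 102400 512
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery.sh@32
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # discovery.sh@35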
00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 Null1 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 [2024-07-15 07:43:31.913678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 Null2 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:47.321 07:43:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 Null3 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.321 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 Null4 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:47.322 07:43:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.322 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:47.579 00:07:47.579 Discovery Log Number of Records 6, Generation counter 6 00:07:47.579 =====Discovery Log Entry 0====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: current discovery subsystem 00:07:47.579 treq: not required 00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4420 00:07:47.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: explicit discovery connections, duplicate discovery information 00:07:47.579 sectype: none 00:07:47.579 =====Discovery Log Entry 1====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: nvme subsystem 00:07:47.579 treq: not required 00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4420 00:07:47.579 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: none 00:07:47.579 sectype: none 00:07:47.579 =====Discovery Log Entry 2====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: nvme subsystem 00:07:47.579 treq: not required 00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4420 00:07:47.579 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: none 00:07:47.579 sectype: none 00:07:47.579 =====Discovery Log Entry 3====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: nvme subsystem 00:07:47.579 treq: not required 00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4420 00:07:47.579 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: none 00:07:47.579 sectype: none 00:07:47.579 =====Discovery Log Entry 4====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: nvme subsystem 00:07:47.579 treq: not required 
00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4420 00:07:47.579 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: none 00:07:47.579 sectype: none 00:07:47.579 =====Discovery Log Entry 5====== 00:07:47.579 trtype: tcp 00:07:47.579 adrfam: ipv4 00:07:47.579 subtype: discovery subsystem referral 00:07:47.579 treq: not required 00:07:47.579 portid: 0 00:07:47.579 trsvcid: 4430 00:07:47.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:47.579 traddr: 10.0.0.2 00:07:47.579 eflags: none 00:07:47.579 sectype: none 00:07:47.579 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:47.579 Perform nvmf subsystem discovery via RPC 00:07:47.579 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:47.579 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.579 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.579 [ 00:07:47.579 { 00:07:47.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:47.579 "subtype": "Discovery", 00:07:47.579 "listen_addresses": [ 00:07:47.579 { 00:07:47.579 "trtype": "TCP", 00:07:47.579 "adrfam": "IPv4", 00:07:47.579 "traddr": "10.0.0.2", 00:07:47.579 "trsvcid": "4420" 00:07:47.579 } 00:07:47.579 ], 00:07:47.579 "allow_any_host": true, 00:07:47.579 "hosts": [] 00:07:47.579 }, 00:07:47.579 { 00:07:47.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.579 "subtype": "NVMe", 00:07:47.579 "listen_addresses": [ 00:07:47.579 { 00:07:47.579 "trtype": "TCP", 00:07:47.579 "adrfam": "IPv4", 00:07:47.579 "traddr": "10.0.0.2", 00:07:47.579 "trsvcid": "4420" 00:07:47.579 } 00:07:47.579 ], 00:07:47.579 "allow_any_host": true, 00:07:47.579 "hosts": [], 00:07:47.579 "serial_number": "SPDK00000000000001", 00:07:47.579 "model_number": "SPDK bdev Controller", 00:07:47.579 "max_namespaces": 32, 00:07:47.579 "min_cntlid": 1, 00:07:47.579 "max_cntlid": 65519, 00:07:47.579 "namespaces": [ 00:07:47.579 { 00:07:47.579 "nsid": 1, 00:07:47.579 "bdev_name": "Null1", 00:07:47.579 "name": "Null1", 00:07:47.579 "nguid": "81E54954BA0540E3BE7D9A118C469B03", 00:07:47.579 "uuid": "81e54954-ba05-40e3-be7d-9a118c469b03" 00:07:47.579 } 00:07:47.579 ] 00:07:47.579 }, 00:07:47.579 { 00:07:47.579 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:47.579 "subtype": "NVMe", 00:07:47.579 "listen_addresses": [ 00:07:47.579 { 00:07:47.579 "trtype": "TCP", 00:07:47.579 "adrfam": "IPv4", 00:07:47.579 "traddr": "10.0.0.2", 00:07:47.579 "trsvcid": "4420" 00:07:47.579 } 00:07:47.579 ], 00:07:47.579 "allow_any_host": true, 00:07:47.579 "hosts": [], 00:07:47.580 "serial_number": "SPDK00000000000002", 00:07:47.580 "model_number": "SPDK bdev Controller", 00:07:47.580 "max_namespaces": 32, 00:07:47.580 "min_cntlid": 1, 00:07:47.580 "max_cntlid": 65519, 00:07:47.580 "namespaces": [ 00:07:47.580 { 00:07:47.580 "nsid": 1, 00:07:47.580 "bdev_name": "Null2", 00:07:47.580 "name": "Null2", 00:07:47.580 "nguid": "C3761A22F68044D7BB07937824A89181", 00:07:47.580 "uuid": "c3761a22-f680-44d7-bb07-937824a89181" 00:07:47.580 } 00:07:47.580 ] 00:07:47.580 }, 00:07:47.580 { 00:07:47.580 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:47.580 "subtype": "NVMe", 00:07:47.580 "listen_addresses": [ 00:07:47.580 { 00:07:47.580 "trtype": "TCP", 00:07:47.580 "adrfam": "IPv4", 00:07:47.580 "traddr": "10.0.0.2", 00:07:47.580 "trsvcid": "4420" 00:07:47.580 } 00:07:47.580 ], 00:07:47.580 "allow_any_host": true, 
00:07:47.580 "hosts": [], 00:07:47.580 "serial_number": "SPDK00000000000003", 00:07:47.580 "model_number": "SPDK bdev Controller", 00:07:47.580 "max_namespaces": 32, 00:07:47.580 "min_cntlid": 1, 00:07:47.580 "max_cntlid": 65519, 00:07:47.580 "namespaces": [ 00:07:47.580 { 00:07:47.580 "nsid": 1, 00:07:47.580 "bdev_name": "Null3", 00:07:47.580 "name": "Null3", 00:07:47.580 "nguid": "566A406F55DC4BF2962208A0AC065876", 00:07:47.580 "uuid": "566a406f-55dc-4bf2-9622-08a0ac065876" 00:07:47.580 } 00:07:47.580 ] 00:07:47.580 }, 00:07:47.580 { 00:07:47.580 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:47.580 "subtype": "NVMe", 00:07:47.580 "listen_addresses": [ 00:07:47.580 { 00:07:47.580 "trtype": "TCP", 00:07:47.580 "adrfam": "IPv4", 00:07:47.580 "traddr": "10.0.0.2", 00:07:47.580 "trsvcid": "4420" 00:07:47.580 } 00:07:47.580 ], 00:07:47.580 "allow_any_host": true, 00:07:47.580 "hosts": [], 00:07:47.580 "serial_number": "SPDK00000000000004", 00:07:47.580 "model_number": "SPDK bdev Controller", 00:07:47.580 "max_namespaces": 32, 00:07:47.580 "min_cntlid": 1, 00:07:47.580 "max_cntlid": 65519, 00:07:47.580 "namespaces": [ 00:07:47.580 { 00:07:47.580 "nsid": 1, 00:07:47.580 "bdev_name": "Null4", 00:07:47.580 "name": "Null4", 00:07:47.580 "nguid": "E7D5621A9F6746FFB1047EE6EE458B1F", 00:07:47.580 "uuid": "e7d5621a-9f67-46ff-b104-7ee6ee458b1f" 00:07:47.580 } 00:07:47.580 ] 00:07:47.580 } 00:07:47.580 ] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.580 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.581 rmmod nvme_tcp 00:07:47.581 rmmod nvme_fabrics 00:07:47.839 rmmod nvme_keyring 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3107533 ']' 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3107533 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3107533 ']' 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3107533 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3107533 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3107533' 00:07:47.839 killing process with pid 3107533 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3107533 00:07:47.839 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3107533 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.098 07:43:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.018 07:43:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.018 00:07:50.018 real 0m9.647s 00:07:50.018 user 0m7.533s 00:07:50.018 sys 0m4.716s 00:07:50.018 07:43:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.018 07:43:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.018 ************************************ 00:07:50.018 END TEST nvmf_target_discovery 00:07:50.018 ************************************ 00:07:50.018 07:43:34 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:50.018 07:43:34 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:50.018 07:43:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.018 07:43:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.018 07:43:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.018 ************************************ 00:07:50.018 START TEST nvmf_referrals 00:07:50.018 ************************************ 00:07:50.018 07:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:50.278 * Looking for test storage... 00:07:50.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
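The referral checks that follow exercise a simple round-trip: add referrals over RPC, read them back both via RPC and via an nvme-cli discovery against the 8009 discovery listener, then remove them and confirm the discovery log drains to empty. A condensed sketch, assuming the nvmf_tgt discovery listener on 10.0.0.2:8009 that this test sets up, using the same jq filters that appear below:

  # Point the discovery service at an external referral, then verify it from both sides.
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # Tear it back down; a subsequent discover should report no referral records.
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

(The transcript additionally passes --hostnqn/--hostid to nvme discover; they are omitted here for brevity.)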
00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.278 07:43:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.849 07:43:40 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.849 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.850 07:43:40 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.850 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.850 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.850 07:43:40 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:07:56.850 00:07:56.850 --- 10.0.0.2 ping statistics --- 00:07:56.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.850 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:56.850 00:07:56.850 --- 10.0.0.1 ping statistics --- 00:07:56.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.850 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3111314 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3111314 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3111314 ']' 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:56.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.850 07:43:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.850 [2024-07-15 07:43:40.727128] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:07:56.850 [2024-07-15 07:43:40.727173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.850 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.850 [2024-07-15 07:43:40.801251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.850 [2024-07-15 07:43:40.880908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.850 [2024-07-15 07:43:40.880945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.850 [2024-07-15 07:43:40.880952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.850 [2024-07-15 07:43:40.880958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.850 [2024-07-15 07:43:40.880963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.850 [2024-07-15 07:43:40.881007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.850 [2024-07-15 07:43:40.881035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.850 [2024-07-15 07:43:40.881142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.850 [2024-07-15 07:43:40.881143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.850 [2024-07-15 07:43:41.592080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.850 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 [2024-07-15 07:43:41.605543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.108 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 07:43:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.366 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.623 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:57.880 07:43:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.880 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.137 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:58.394 07:43:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.394 07:43:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.394 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.650 
07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.650 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.650 rmmod nvme_tcp 00:07:58.907 rmmod nvme_fabrics 00:07:58.907 rmmod nvme_keyring 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3111314 ']' 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3111314 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3111314 ']' 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3111314 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3111314 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3111314' 00:07:58.907 killing process with pid 3111314 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3111314 00:07:58.907 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3111314 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.166 07:43:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.072 07:43:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.072 00:08:01.072 real 0m11.012s 00:08:01.073 user 0m13.612s 00:08:01.073 sys 0m5.123s 00:08:01.073 07:43:45 
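Aside: the teardown just traced reduces to a handful of commands. A sketch, assuming the nvmf_tgt pid was captured in $nvmfpid when it was started, and that _remove_spdk_ns amounts to deleting the test namespace:

  modprobe -v -r nvme-tcp          # unload the initiator transport (rmmod output lands in the log above)
  modprobe -v -r nvme-fabrics      # nvme_keyring is pulled out along the way
  kill "$nvmfpid"                  # stop the nvmf_tgt reactor (pid 3111314 in this run)
  ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns boils down to here
  ip -4 addr flush cvl_0_1         # drop the initiator-side test address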
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.073 07:43:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.073 ************************************ 00:08:01.073 END TEST nvmf_referrals 00:08:01.073 ************************************ 00:08:01.073 07:43:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.073 07:43:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:01.073 07:43:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.073 07:43:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.073 07:43:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.332 ************************************ 00:08:01.332 START TEST nvmf_connect_disconnect 00:08:01.332 ************************************ 00:08:01.332 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:01.332 * Looking for test storage... 00:08:01.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.332 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.333 07:43:45 
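Aside: the host identity values sourced above come straight from nvme-cli. A sketch of one way to derive both; the suffix-stripping is an assumption about how the harness obtains the host ID from the generated NQN:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # reuse the UUID portion as --hostid
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009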
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.333 07:43:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.971 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:07.972 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:07.972 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.972 07:43:51 
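Aside: the device scan above keys off PCI device IDs (0x159b is an E810 function bound to the ice driver) and then resolves each matched function to its kernel netdev through sysfs. A sketch of that resolution step, using the two functions found in this run:

  for pci in 0000:86:00.0 0000:86:00.1; do
    # each entry under .../net is the netdev name for that PCI function (cvl_0_0 / cvl_0_1 here)
    ls "/sys/bus/pci/devices/$pci/net/"
  done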
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:07.972 Found net devices under 0000:86:00.0: cvl_0_0 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:07.972 Found net devices under 0000:86:00.1: cvl_0_1 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:07.972 00:08:07.972 --- 10.0.0.2 ping statistics --- 00:08:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.972 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:07.972 00:08:07.972 --- 10.0.0.1 ping statistics --- 00:08:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.972 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3115390 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3115390 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3115390 ']' 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.972 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.973 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.973 07:43:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 [2024-07-15 07:43:51.809754] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
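Aside: the topology those two pings validated is built entirely from the iproute2 commands traced above: one NIC port moves into a private namespace as the target side, and the sibling port stays in the root namespace as the initiator. A condensed sketch:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator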
00:08:07.973 [2024-07-15 07:43:51.809804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.973 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.973 [2024-07-15 07:43:51.881528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.973 [2024-07-15 07:43:51.961530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.973 [2024-07-15 07:43:51.961565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.973 [2024-07-15 07:43:51.961572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.973 [2024-07-15 07:43:51.961578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.973 [2024-07-15 07:43:51.961583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.973 [2024-07-15 07:43:51.961628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.973 [2024-07-15 07:43:51.961739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.973 [2024-07-15 07:43:51.961845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.973 [2024-07-15 07:43:51.961846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 [2024-07-15 07:43:52.672282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.973 07:43:52 
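Aside: a condensed sketch of the RPC sequence this trace walks through (the add_ns/add_listener calls appear in the trace that follows). scripts/rpc.py is assumed to reach the RPC socket of the nvmf_tgt running in the namespace, and the connect/disconnect round is sketched with plain nvme-cli flags rather than the harness wrappers, which also pass --hostnqn/--hostid:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512     # 64 MiB backing bdev, 512 B blocks; prints "Malloc0"
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # one iteration of the 5-round loop below, driven from the initiator:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints "... disconnected 1 controller(s)"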
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.973 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:07.973 [2024-07-15 07:43:52.724011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.232 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.232 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:08.232 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:08.232 07:43:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:11.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.614 rmmod nvme_tcp 00:08:24.614 rmmod nvme_fabrics 00:08:24.614 rmmod nvme_keyring 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3115390 ']' 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3115390 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3115390 ']' 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3115390 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3115390 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3115390' 00:08:24.614 killing process with pid 3115390 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3115390 00:08:24.614 07:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3115390 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.614 07:44:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.521 07:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.521 00:08:26.521 real 0m25.416s 00:08:26.521 user 1m10.083s 00:08:26.521 sys 0m5.569s 00:08:26.521 07:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.521 07:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.521 ************************************ 00:08:26.521 END TEST nvmf_connect_disconnect 00:08:26.521 ************************************ 00:08:26.781 07:44:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:26.781 07:44:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:26.781 07:44:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:26.781 07:44:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.781 07:44:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.781 ************************************ 00:08:26.781 START TEST nvmf_multitarget 00:08:26.781 ************************************ 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:26.781 * Looking for test storage... 
00:08:26.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.781 07:44:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.782 07:44:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:33.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.366 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:33.367 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:33.367 Found net devices under 0000:86:00.0: cvl_0_0 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:33.367 Found net devices under 0000:86:00.1: cvl_0_1 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.367 07:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:33.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:33.367 00:08:33.367 --- 10.0.0.2 ping statistics --- 00:08:33.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.367 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:08:33.367 00:08:33.367 --- 10.0.0.1 ping statistics --- 00:08:33.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.367 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3121784 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3121784 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3121784 ']' 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.367 07:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.367 [2024-07-15 07:44:17.283020] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
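Aside: the multitarget checks that follow drive everything through multitarget_rpc.py. A condensed sketch of the create/verify/delete cycle, with $rpc_py standing for the full script path set earlier in the trace:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # two new targets plus the default one
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target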
00:08:33.367 [2024-07-15 07:44:17.283064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.367 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.367 [2024-07-15 07:44:17.340786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.367 [2024-07-15 07:44:17.420668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.367 [2024-07-15 07:44:17.420707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.367 [2024-07-15 07:44:17.420713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.367 [2024-07-15 07:44:17.420719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.367 [2024-07-15 07:44:17.420727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.367 [2024-07-15 07:44:17.424244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.367 [2024-07-15 07:44:17.424280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.367 [2024-07-15 07:44:17.424387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.367 [2024-07-15 07:44:17.424387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:33.625 "nvmf_tgt_1" 00:08:33.625 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:33.883 "nvmf_tgt_2" 00:08:33.883 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:33.883 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:33.883 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:33.883 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:34.140 true 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:34.140 true 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.140 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.140 rmmod nvme_tcp 00:08:34.399 rmmod nvme_fabrics 00:08:34.399 rmmod nvme_keyring 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3121784 ']' 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3121784 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3121784 ']' 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3121784 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3121784 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3121784' 00:08:34.399 killing process with pid 3121784 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3121784 00:08:34.399 07:44:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3121784 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.658 07:44:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.614 07:44:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.614 00:08:36.614 real 0m9.915s 00:08:36.614 user 0m9.264s 00:08:36.614 sys 0m4.852s 00:08:36.614 07:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.614 07:44:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:36.614 ************************************ 00:08:36.614 END TEST nvmf_multitarget 00:08:36.614 ************************************ 00:08:36.614 07:44:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.614 07:44:21 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:36.614 07:44:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.614 07:44:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.614 07:44:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.614 ************************************ 00:08:36.614 START TEST nvmf_rpc 00:08:36.614 ************************************ 00:08:36.614 07:44:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:36.873 * Looking for test storage... 
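The nvmf_multitarget pass just reported boils down to a create/verify/delete cycle driven entirely through the JSON-RPC wrapper. A minimal sketch of that cycle, with the script path and flags exactly as they appear in the trace (the jq length checks mirror the '[' 1 '!=' 1 ']' style assertions above):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

$RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target name
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets

$RPC nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default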
00:08:36.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.873 07:44:21 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.874 07:44:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
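Every target test in this run opens with the same boilerplate that rpc.sh is tracing here: source the shared helpers (which pull in the paths/export.sh PATH dump above), let nvmftestinit discover the NICs and rebuild the namespace fixture, then start the target app. Reconstructed as a skeleton; helper names are the ones traced above, $rootdir is assumed to stand for the spdk checkout, and argument parsing is omitted:

#!/usr/bin/env bash
source "$rootdir/test/nvmf/common.sh"   # NVMF_PORT=4420, NVME_HOSTNQN, rpc_cmd, ...

nvmftestinit        # scan PCI for supported e810/x722 ports, pick target and
                    # initiator interfaces, and run nvmf_tcp_init as shown below
nvmfappstart -m 0xF # launch nvmf_tgt on 4 cores inside the target netns and
                    # waitforlisten on /var/tmp/spdk.sock

trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

# ... per-test RPC calls and assertions go here ...

nvmftestfini        # tear down the namespace and unload nvme-tcp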
00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:43.442 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:43.442 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:43.442 Found net devices under 0000:86:00.0: cvl_0_0 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:43.442 Found net devices under 0000:86:00.1: cvl_0_1 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.442 07:44:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.442 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.442 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.442 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.442 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.442 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:08:43.443 00:08:43.443 --- 10.0.0.2 ping statistics --- 00:08:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.443 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:43.443 00:08:43.443 --- 10.0.0.1 ping statistics --- 00:08:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.443 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3125572 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3125572 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3125572 ']' 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.443 07:44:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 [2024-07-15 07:44:27.283958] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:08:43.443 [2024-07-15 07:44:27.284005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.443 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.443 [2024-07-15 07:44:27.358080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.443 [2024-07-15 07:44:27.440923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.443 [2024-07-15 07:44:27.440958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:43.443 [2024-07-15 07:44:27.440965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.443 [2024-07-15 07:44:27.440974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.443 [2024-07-15 07:44:27.440979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.443 [2024-07-15 07:44:27.441029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.443 [2024-07-15 07:44:27.441169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.443 [2024-07-15 07:44:27.441277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.443 [2024-07-15 07:44:27.441278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:43.443 "tick_rate": 2300000000, 00:08:43.443 "poll_groups": [ 00:08:43.443 { 00:08:43.443 "name": "nvmf_tgt_poll_group_000", 00:08:43.443 "admin_qpairs": 0, 00:08:43.443 "io_qpairs": 0, 00:08:43.443 "current_admin_qpairs": 0, 00:08:43.443 "current_io_qpairs": 0, 00:08:43.443 "pending_bdev_io": 0, 00:08:43.443 "completed_nvme_io": 0, 00:08:43.443 "transports": [] 00:08:43.443 }, 00:08:43.443 { 00:08:43.443 "name": "nvmf_tgt_poll_group_001", 00:08:43.443 "admin_qpairs": 0, 00:08:43.443 "io_qpairs": 0, 00:08:43.443 "current_admin_qpairs": 0, 00:08:43.443 "current_io_qpairs": 0, 00:08:43.443 "pending_bdev_io": 0, 00:08:43.443 "completed_nvme_io": 0, 00:08:43.443 "transports": [] 00:08:43.443 }, 00:08:43.443 { 00:08:43.443 "name": "nvmf_tgt_poll_group_002", 00:08:43.443 "admin_qpairs": 0, 00:08:43.443 "io_qpairs": 0, 00:08:43.443 "current_admin_qpairs": 0, 00:08:43.443 "current_io_qpairs": 0, 00:08:43.443 "pending_bdev_io": 0, 00:08:43.443 "completed_nvme_io": 0, 00:08:43.443 "transports": [] 00:08:43.443 }, 00:08:43.443 { 00:08:43.443 "name": "nvmf_tgt_poll_group_003", 00:08:43.443 "admin_qpairs": 0, 00:08:43.443 "io_qpairs": 0, 00:08:43.443 "current_admin_qpairs": 0, 00:08:43.443 "current_io_qpairs": 0, 00:08:43.443 "pending_bdev_io": 0, 00:08:43.443 "completed_nvme_io": 0, 00:08:43.443 "transports": [] 00:08:43.443 } 00:08:43.443 ] 00:08:43.443 }' 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:43.443 07:44:28 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 [2024-07-15 07:44:28.271641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:43.701 "tick_rate": 2300000000, 00:08:43.701 "poll_groups": [ 00:08:43.701 { 00:08:43.701 "name": "nvmf_tgt_poll_group_000", 00:08:43.701 "admin_qpairs": 0, 00:08:43.701 "io_qpairs": 0, 00:08:43.701 "current_admin_qpairs": 0, 00:08:43.701 "current_io_qpairs": 0, 00:08:43.701 "pending_bdev_io": 0, 00:08:43.701 "completed_nvme_io": 0, 00:08:43.701 "transports": [ 00:08:43.701 { 00:08:43.701 "trtype": "TCP" 00:08:43.701 } 00:08:43.701 ] 00:08:43.701 }, 00:08:43.701 { 00:08:43.701 "name": "nvmf_tgt_poll_group_001", 00:08:43.701 "admin_qpairs": 0, 00:08:43.701 "io_qpairs": 0, 00:08:43.701 "current_admin_qpairs": 0, 00:08:43.701 "current_io_qpairs": 0, 00:08:43.701 "pending_bdev_io": 0, 00:08:43.701 "completed_nvme_io": 0, 00:08:43.701 "transports": [ 00:08:43.701 { 00:08:43.701 "trtype": "TCP" 00:08:43.701 } 00:08:43.701 ] 00:08:43.701 }, 00:08:43.701 { 00:08:43.701 "name": "nvmf_tgt_poll_group_002", 00:08:43.701 "admin_qpairs": 0, 00:08:43.701 "io_qpairs": 0, 00:08:43.701 "current_admin_qpairs": 0, 00:08:43.701 "current_io_qpairs": 0, 00:08:43.701 "pending_bdev_io": 0, 00:08:43.701 "completed_nvme_io": 0, 00:08:43.701 "transports": [ 00:08:43.701 { 00:08:43.701 "trtype": "TCP" 00:08:43.701 } 00:08:43.701 ] 00:08:43.701 }, 00:08:43.701 { 00:08:43.701 "name": "nvmf_tgt_poll_group_003", 00:08:43.701 "admin_qpairs": 0, 00:08:43.701 "io_qpairs": 0, 00:08:43.701 "current_admin_qpairs": 0, 00:08:43.701 "current_io_qpairs": 0, 00:08:43.701 "pending_bdev_io": 0, 00:08:43.701 "completed_nvme_io": 0, 00:08:43.701 "transports": [ 00:08:43.701 { 00:08:43.701 "trtype": "TCP" 00:08:43.701 } 00:08:43.701 ] 00:08:43.701 } 00:08:43.701 ] 00:08:43.701 }' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
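The jcount/jsum helpers being traced here exist so plain shell can assert on the nvmf_get_stats JSON above: jq reduces the document to one value per poll group, and wc/awk collapse those values to a scalar. The same checks written out directly (rpc_cmd is the suite's wrapper for issuing RPCs to the target over /var/tmp/spdk.sock, per the waitforlisten message above):

stats=$(rpc_cmd nvmf_get_stats)

# jcount: one poll group per reactor, so -m 0xF must yield 4 names
[ "$(echo "$stats" | jq '.poll_groups[].name' | wc -l)" -eq 4 ]

# jsum: total qpairs summed across poll groups; 0 on an idle, just-started target
[ "$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')" -eq 0 ]

# transports[] stays empty (element 0 is null) until a transport is created;
# after nvmf_create_transport every poll group reports a TCP entry
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[0].trtype'   # -> TCP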
00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 Malloc1 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 [2024-07-15 07:44:28.439665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:43.701 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:43.702 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:43.958 [2024-07-15 07:44:28.468277] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:43.958 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:43.958 could not add new controller: failed to write to nvme-fabrics device 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.958 07:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.326 07:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.326 07:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.326 07:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.326 07:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.326 07:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.218 07:44:31 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.218 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.219 [2024-07-15 07:44:31.861105] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:47.219 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:47.219 could not add new controller: failed to write to nvme-fabrics device 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.219 07:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.587 07:44:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.587 07:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.587 07:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.587 07:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.587 07:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.479 07:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.479 07:44:35 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 [2024-07-15 07:44:35.151698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.479 07:44:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.844 07:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.844 07:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.844 07:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.844 07:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:51.844 07:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.736 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 [2024-07-15 07:44:38.432602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.737 07:44:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:55.105 07:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:55.105 07:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:55.105 07:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.105 07:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:55.105 07:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 [2024-07-15 07:44:41.713023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.998 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.999 07:44:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.367 07:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.367 07:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:58.367 07:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.367 07:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:58.367 07:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:00.260 07:44:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.260 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:00.260 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.260 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.260 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 [2024-07-15 07:44:45.038331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.517 07:44:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.478 07:44:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.478 07:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:01.478 07:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.478 07:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:01.478 07:44:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.004 
07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 [2024-07-15 07:44:48.328538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 07:44:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.004 07:44:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.936 07:44:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.936 07:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:04.936 07:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.936 07:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:04.936 07:44:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.831 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- 
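The five iterations traced above all drive the same cycle from target/rpc.sh (lines 81-94): create a subsystem with serial SPDKISFASTANDAWESOME, add a TCP listener on 10.0.0.2:4420, attach bdev Malloc1 as namespace 5, allow any host, connect with nvme-cli, poll lsblk until the serial surfaces as a block device, disconnect, then tear the subsystem back down. A condensed sketch of that cycle; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the --hostnqn/--hostid flags and the 15-try retry bookkeeping visible in the trace are omitted here:

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: poll until the namespace appears with the expected serial
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The second loop that follows (rpc.sh lines 99-107) repeats the same create/add/remove/delete RPC sequence five more times without ever connecting an initiator, checking that configuration churn alone is stable.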
common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 [2024-07-15 07:44:51.618398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 [2024-07-15 07:44:51.666503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 [2024-07-15 07:44:51.718667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.090 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 [2024-07-15 07:44:51.766846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 [2024-07-15 07:44:51.815002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.091 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.349 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:07.349 "tick_rate": 2300000000, 00:09:07.349 "poll_groups": [ 00:09:07.349 { 00:09:07.349 "name": "nvmf_tgt_poll_group_000", 00:09:07.349 "admin_qpairs": 2, 00:09:07.349 "io_qpairs": 168, 00:09:07.349 "current_admin_qpairs": 0, 00:09:07.349 "current_io_qpairs": 0, 00:09:07.349 "pending_bdev_io": 0, 00:09:07.349 "completed_nvme_io": 252, 00:09:07.350 "transports": [ 00:09:07.350 { 00:09:07.350 "trtype": "TCP" 00:09:07.350 } 00:09:07.350 ] 00:09:07.350 }, 00:09:07.350 { 00:09:07.350 "name": "nvmf_tgt_poll_group_001", 00:09:07.350 "admin_qpairs": 2, 00:09:07.350 "io_qpairs": 168, 00:09:07.350 "current_admin_qpairs": 0, 00:09:07.350 "current_io_qpairs": 0, 00:09:07.350 "pending_bdev_io": 0, 00:09:07.350 "completed_nvme_io": 279, 00:09:07.350 "transports": [ 00:09:07.350 { 00:09:07.350 "trtype": "TCP" 00:09:07.350 } 00:09:07.350 ] 00:09:07.350 }, 00:09:07.350 { 
00:09:07.350 "name": "nvmf_tgt_poll_group_002", 00:09:07.350 "admin_qpairs": 1, 00:09:07.350 "io_qpairs": 168, 00:09:07.350 "current_admin_qpairs": 0, 00:09:07.350 "current_io_qpairs": 0, 00:09:07.350 "pending_bdev_io": 0, 00:09:07.350 "completed_nvme_io": 242, 00:09:07.350 "transports": [ 00:09:07.350 { 00:09:07.350 "trtype": "TCP" 00:09:07.350 } 00:09:07.350 ] 00:09:07.350 }, 00:09:07.350 { 00:09:07.350 "name": "nvmf_tgt_poll_group_003", 00:09:07.350 "admin_qpairs": 2, 00:09:07.350 "io_qpairs": 168, 00:09:07.350 "current_admin_qpairs": 0, 00:09:07.350 "current_io_qpairs": 0, 00:09:07.350 "pending_bdev_io": 0, 00:09:07.350 "completed_nvme_io": 249, 00:09:07.350 "transports": [ 00:09:07.350 { 00:09:07.350 "trtype": "TCP" 00:09:07.350 } 00:09:07.350 ] 00:09:07.350 } 00:09:07.350 ] 00:09:07.350 }' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.350 07:44:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.350 rmmod nvme_tcp 00:09:07.350 rmmod nvme_fabrics 00:09:07.350 rmmod nvme_keyring 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3125572 ']' 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3125572 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3125572 ']' 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3125572 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3125572 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3125572' 00:09:07.350 killing process with pid 3125572 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3125572 00:09:07.350 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3125572 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.609 07:44:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.145 07:44:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.145 00:09:10.145 real 0m33.035s 00:09:10.145 user 1m40.887s 00:09:10.145 sys 0m6.038s 00:09:10.145 07:44:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.145 07:44:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.145 ************************************ 00:09:10.145 END TEST nvmf_rpc 00:09:10.145 ************************************ 00:09:10.145 07:44:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:10.145 07:44:54 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:10.145 07:44:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.145 07:44:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.145 07:44:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.145 ************************************ 00:09:10.145 START TEST nvmf_invalid 00:09:10.145 ************************************ 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:10.145 * Looking for test storage... 
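nvmftestfini, which produced the rmmod/killprocess lines above, mirrors the setup in reverse: unload the kernel initiator modules, kill the nvmf_tgt pid, and drop the test namespace. A rough sketch; killprocess and _remove_spdk_ns are helpers in common/autotest_common.sh and nvmf/common.sh, and the namespace-delete body shown here is an assumption about what _remove_spdk_ns does in this run:

for i in {1..20}; do modprobe -v -r nvme-tcp && break; done   # retried, as in the trace
modprobe -v -r nvme-fabrics
killprocess 3125572                 # SIGTERM the nvmf_tgt pid, then wait for exit
ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns (fd 14 silenced)
ip -4 addr flush cvl_0_1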
00:09:10.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.145 07:44:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.146 07:44:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:15.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:15.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:15.419 Found net devices under 0000:86:00.0: cvl_0_0 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:15.419 Found net devices under 0000:86:00.1: cvl_0_1 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.419 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:09:15.679 00:09:15.679 --- 10.0.0.2 ping statistics --- 00:09:15.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.679 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:09:15.679 00:09:15.679 --- 10.0.0.1 ping statistics --- 00:09:15.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.679 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3133447 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3133447 00:09:15.679 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3133447 ']' 00:09:15.680 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.680 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.680 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.680 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.680 07:45:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.680 [2024-07-15 07:45:00.358013] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
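nvmf_tcp_init, traced just above, builds the two-port loopback topology every phy run here uses: one port of the E810 pair (cvl_0_0) moves into a network namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and reachability is verified with one ping in each direction. The same steps, condensed (interface names are specific to this machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the trace) and waitforlisten blocks until pid 3133447 answers RPCs on /var/tmp/spdk.sock.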
00:09:15.680 [2024-07-15 07:45:00.358056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.680 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.680 [2024-07-15 07:45:00.430681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.939 [2024-07-15 07:45:00.511373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.939 [2024-07-15 07:45:00.511413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.939 [2024-07-15 07:45:00.511420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.939 [2024-07-15 07:45:00.511427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.939 [2024-07-15 07:45:00.511432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.939 [2024-07-15 07:45:00.511484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.939 [2024-07-15 07:45:00.511591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.939 [2024-07-15 07:45:00.511696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.939 [2024-07-15 07:45:00.511697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:16.506 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13101 00:09:16.765 [2024-07-15 07:45:01.382711] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:16.765 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:16.765 { 00:09:16.765 "nqn": "nqn.2016-06.io.spdk:cnode13101", 00:09:16.765 "tgt_name": "foobar", 00:09:16.765 "method": "nvmf_create_subsystem", 00:09:16.765 "req_id": 1 00:09:16.765 } 00:09:16.765 Got JSON-RPC error response 00:09:16.765 response: 00:09:16.765 { 00:09:16.765 "code": -32603, 00:09:16.765 "message": "Unable to find target foobar" 00:09:16.765 }' 00:09:16.765 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:16.765 { 00:09:16.765 "nqn": "nqn.2016-06.io.spdk:cnode13101", 00:09:16.765 "tgt_name": "foobar", 00:09:16.765 "method": "nvmf_create_subsystem", 00:09:16.765 "req_id": 1 00:09:16.765 } 00:09:16.765 Got JSON-RPC error response 00:09:16.765 response: 00:09:16.765 { 00:09:16.765 "code": -32603, 00:09:16.765 "message": "Unable to find target foobar" 
00:09:16.765 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:16.765 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:16.765 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21910 00:09:17.024 [2024-07-15 07:45:01.579418] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21910: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:17.024 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:17.024 { 00:09:17.024 "nqn": "nqn.2016-06.io.spdk:cnode21910", 00:09:17.024 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:17.024 "method": "nvmf_create_subsystem", 00:09:17.024 "req_id": 1 00:09:17.024 } 00:09:17.024 Got JSON-RPC error response 00:09:17.024 response: 00:09:17.024 { 00:09:17.024 "code": -32602, 00:09:17.024 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:17.024 }' 00:09:17.024 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:17.024 { 00:09:17.024 "nqn": "nqn.2016-06.io.spdk:cnode21910", 00:09:17.024 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:17.024 "method": "nvmf_create_subsystem", 00:09:17.024 "req_id": 1 00:09:17.024 } 00:09:17.024 Got JSON-RPC error response 00:09:17.024 response: 00:09:17.024 { 00:09:17.024 "code": -32602, 00:09:17.024 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:17.024 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:17.024 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:17.024 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31241 00:09:17.024 [2024-07-15 07:45:01.768001] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31241: invalid model number 'SPDK_Controller' 00:09:17.283 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:17.283 { 00:09:17.283 "nqn": "nqn.2016-06.io.spdk:cnode31241", 00:09:17.283 "model_number": "SPDK_Controller\u001f", 00:09:17.283 "method": "nvmf_create_subsystem", 00:09:17.283 "req_id": 1 00:09:17.283 } 00:09:17.283 Got JSON-RPC error response 00:09:17.283 response: 00:09:17.283 { 00:09:17.283 "code": -32602, 00:09:17.283 "message": "Invalid MN SPDK_Controller\u001f" 00:09:17.283 }' 00:09:17.283 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:17.283 { 00:09:17.283 "nqn": "nqn.2016-06.io.spdk:cnode31241", 00:09:17.283 "model_number": "SPDK_Controller\u001f", 00:09:17.283 "method": "nvmf_create_subsystem", 00:09:17.283 "req_id": 1 00:09:17.283 } 00:09:17.283 Got JSON-RPC error response 00:09:17.283 response: 00:09:17.283 { 00:09:17.283 "code": -32602, 00:09:17.283 "message": "Invalid MN SPDK_Controller\u001f" 00:09:17.283 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:17.283 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:17.283 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 
07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 
07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '=p!-767<).aLX%pSqVf7r' 00:09:17.284 07:45:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=p!-767<).aLX%pSqVf7r' nqn.2016-06.io.spdk:cnode6884 00:09:17.544 [2024-07-15 07:45:02.093114] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6884: invalid serial number '=p!-767<).aLX%pSqVf7r' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:17.544 { 00:09:17.544 "nqn": "nqn.2016-06.io.spdk:cnode6884", 00:09:17.544 "serial_number": "=p!-767<).aLX%pSqVf7r", 00:09:17.544 "method": "nvmf_create_subsystem", 00:09:17.544 "req_id": 1 00:09:17.544 } 00:09:17.544 Got JSON-RPC error response 00:09:17.544 response: 00:09:17.544 { 
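The per-character trace above is gen_random_s assembling a random serial/model string one byte at a time; condensed, the helper looks roughly like this (a sketch; the real script also rejects a leading '-', per the [[ ... == \- ]] check at invalid.sh line 28):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # printable ASCII codes, as in the array above
        for (( ll = 0; ll < length; ll++ )); do
            string+=$(echo -en "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The 21-character result is fed back through nvmf_create_subsystem as a serial number, and a 41-character one later as a model number, both of which the target must reject as over-length.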
00:09:17.544 "code": -32602, 00:09:17.544 "message": "Invalid SN =p!-767<).aLX%pSqVf7r" 00:09:17.544 }' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:17.544 { 00:09:17.544 "nqn": "nqn.2016-06.io.spdk:cnode6884", 00:09:17.544 "serial_number": "=p!-767<).aLX%pSqVf7r", 00:09:17.544 "method": "nvmf_create_subsystem", 00:09:17.544 "req_id": 1 00:09:17.544 } 00:09:17.544 Got JSON-RPC error response 00:09:17.544 response: 00:09:17.544 { 00:09:17.544 "code": -32602, 00:09:17.544 "message": "Invalid SN =p!-767<).aLX%pSqVf7r" 00:09:17.544 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:17.544 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 
00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 
00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:17.545 
07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:17.545 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.546 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.804 
07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:17.804 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 
07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '&|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z' 00:09:17.805 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '&|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z' nqn.2016-06.io.spdk:cnode9842 00:09:17.805 [2024-07-15 07:45:02.550702] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9842: invalid model number '&|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z' 00:09:18.063 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:18.063 { 00:09:18.063 "nqn": "nqn.2016-06.io.spdk:cnode9842", 00:09:18.063 "model_number": "&|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z", 00:09:18.063 "method": "nvmf_create_subsystem", 00:09:18.063 "req_id": 1 00:09:18.063 } 00:09:18.063 Got JSON-RPC error response 00:09:18.063 response: 00:09:18.063 { 00:09:18.063 "code": -32602, 00:09:18.063 "message": "Invalid MN &|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z" 00:09:18.063 }' 00:09:18.063 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:18.063 { 00:09:18.063 "nqn": "nqn.2016-06.io.spdk:cnode9842", 00:09:18.063 "model_number": "&|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z", 00:09:18.063 "method": "nvmf_create_subsystem", 00:09:18.063 "req_id": 1 00:09:18.063 } 00:09:18.063 Got JSON-RPC error response 00:09:18.063 response: 00:09:18.063 { 00:09:18.063 "code": -32602, 00:09:18.063 "message": "Invalid MN &|>dvzdmhlL,V_AQckdd9;heY4={,obU!CM&YRL0Z" 00:09:18.063 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:18.063 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:18.063 [2024-07-15 07:45:02.739406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.063 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:18.321 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:18.321 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:18.321 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:18.321 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:18.321 07:45:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:18.589 [2024-07-15 07:45:03.120659] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:18.590 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:18.590 { 00:09:18.590 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:18.590 "listen_address": { 00:09:18.590 "trtype": "tcp", 00:09:18.590 "traddr": "", 00:09:18.590 "trsvcid": "4421" 00:09:18.590 }, 00:09:18.590 "method": "nvmf_subsystem_remove_listener", 00:09:18.590 "req_id": 1 00:09:18.590 } 00:09:18.590 Got JSON-RPC error response 00:09:18.590 response: 00:09:18.590 { 00:09:18.590 "code": -32602, 00:09:18.590 "message": "Invalid parameters" 00:09:18.590 }' 00:09:18.590 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:18.590 { 00:09:18.590 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:18.590 "listen_address": { 00:09:18.590 "trtype": "tcp", 00:09:18.590 "traddr": "", 00:09:18.590 "trsvcid": "4421" 00:09:18.590 }, 00:09:18.590 "method": "nvmf_subsystem_remove_listener", 00:09:18.590 "req_id": 1 00:09:18.590 } 00:09:18.590 Got JSON-RPC error response 00:09:18.590 response: 00:09:18.590 { 00:09:18.590 "code": -32602, 00:09:18.590 "message": "Invalid parameters" 00:09:18.590 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:18.590 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20627 -i 0 00:09:18.590 [2024-07-15 07:45:03.305238] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20627: invalid cntlid range [0-65519] 00:09:18.590 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:18.591 { 00:09:18.591 "nqn": "nqn.2016-06.io.spdk:cnode20627", 00:09:18.591 "min_cntlid": 0, 00:09:18.591 "method": "nvmf_create_subsystem", 00:09:18.591 "req_id": 1 00:09:18.591 } 00:09:18.591 Got JSON-RPC error response 00:09:18.591 response: 00:09:18.591 { 00:09:18.591 "code": -32602, 00:09:18.591 "message": "Invalid cntlid range [0-65519]" 00:09:18.591 }' 00:09:18.591 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:18.591 { 00:09:18.591 "nqn": "nqn.2016-06.io.spdk:cnode20627", 00:09:18.591 "min_cntlid": 0, 00:09:18.591 "method": "nvmf_create_subsystem", 00:09:18.591 "req_id": 1 00:09:18.591 } 00:09:18.591 Got JSON-RPC error response 00:09:18.591 response: 00:09:18.591 { 00:09:18.591 "code": -32602, 00:09:18.591 "message": "Invalid cntlid range [0-65519]" 00:09:18.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:09:18.591 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26140 -i 65520 00:09:18.850 [2024-07-15 07:45:03.481833] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26140: invalid cntlid range [65520-65519] 00:09:18.850 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:18.850 { 00:09:18.850 "nqn": "nqn.2016-06.io.spdk:cnode26140", 00:09:18.850 "min_cntlid": 65520, 00:09:18.850 "method": "nvmf_create_subsystem", 00:09:18.850 "req_id": 1 00:09:18.850 } 00:09:18.850 Got JSON-RPC error response 00:09:18.850 response: 00:09:18.850 { 00:09:18.850 "code": -32602, 00:09:18.850 "message": "Invalid cntlid range [65520-65519]" 00:09:18.850 }' 00:09:18.850 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:18.850 { 00:09:18.850 "nqn": "nqn.2016-06.io.spdk:cnode26140", 00:09:18.850 "min_cntlid": 65520, 00:09:18.850 "method": "nvmf_create_subsystem", 00:09:18.850 "req_id": 1 00:09:18.850 } 00:09:18.850 Got JSON-RPC error response 00:09:18.850 response: 00:09:18.850 { 00:09:18.850 "code": -32602, 00:09:18.850 "message": "Invalid cntlid range [65520-65519]" 00:09:18.850 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.850 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21370 -I 0 00:09:19.108 [2024-07-15 07:45:03.654457] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21370: invalid cntlid range [1-0] 00:09:19.108 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:19.108 { 00:09:19.108 "nqn": "nqn.2016-06.io.spdk:cnode21370", 00:09:19.108 "max_cntlid": 0, 00:09:19.108 "method": "nvmf_create_subsystem", 00:09:19.108 "req_id": 1 00:09:19.108 } 00:09:19.108 Got JSON-RPC error response 00:09:19.108 response: 00:09:19.108 { 00:09:19.108 "code": -32602, 00:09:19.108 "message": "Invalid cntlid range [1-0]" 00:09:19.108 }' 00:09:19.108 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:19.108 { 00:09:19.108 "nqn": "nqn.2016-06.io.spdk:cnode21370", 00:09:19.108 "max_cntlid": 0, 00:09:19.108 "method": "nvmf_create_subsystem", 00:09:19.108 "req_id": 1 00:09:19.108 } 00:09:19.108 Got JSON-RPC error response 00:09:19.108 response: 00:09:19.108 { 00:09:19.108 "code": -32602, 00:09:19.108 "message": "Invalid cntlid range [1-0]" 00:09:19.108 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:19.108 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5029 -I 65520 00:09:19.108 [2024-07-15 07:45:03.847107] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5029: invalid cntlid range [1-65520] 00:09:19.365 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:19.365 { 00:09:19.365 "nqn": "nqn.2016-06.io.spdk:cnode5029", 00:09:19.365 "max_cntlid": 65520, 00:09:19.365 "method": "nvmf_create_subsystem", 00:09:19.365 "req_id": 1 00:09:19.365 } 00:09:19.365 Got JSON-RPC error response 00:09:19.365 response: 00:09:19.365 { 00:09:19.365 "code": -32602, 00:09:19.365 "message": "Invalid cntlid range [1-65520]" 00:09:19.365 }' 00:09:19.365 07:45:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:09:19.365 { 00:09:19.365 "nqn": "nqn.2016-06.io.spdk:cnode5029", 00:09:19.365 "max_cntlid": 65520, 00:09:19.365 "method": "nvmf_create_subsystem", 00:09:19.365 "req_id": 1 00:09:19.365 } 00:09:19.365 Got JSON-RPC error response 00:09:19.365 response: 00:09:19.365 { 00:09:19.365 "code": -32602, 00:09:19.365 "message": "Invalid cntlid range [1-65520]" 00:09:19.365 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:19.365 07:45:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27574 -i 6 -I 5 00:09:19.365 [2024-07-15 07:45:04.043796] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27574: invalid cntlid range [6-5] 00:09:19.365 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:19.365 { 00:09:19.365 "nqn": "nqn.2016-06.io.spdk:cnode27574", 00:09:19.365 "min_cntlid": 6, 00:09:19.365 "max_cntlid": 5, 00:09:19.365 "method": "nvmf_create_subsystem", 00:09:19.365 "req_id": 1 00:09:19.365 } 00:09:19.365 Got JSON-RPC error response 00:09:19.365 response: 00:09:19.365 { 00:09:19.365 "code": -32602, 00:09:19.365 "message": "Invalid cntlid range [6-5]" 00:09:19.365 }' 00:09:19.365 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:19.365 { 00:09:19.365 "nqn": "nqn.2016-06.io.spdk:cnode27574", 00:09:19.365 "min_cntlid": 6, 00:09:19.365 "max_cntlid": 5, 00:09:19.365 "method": "nvmf_create_subsystem", 00:09:19.365 "req_id": 1 00:09:19.365 } 00:09:19.365 Got JSON-RPC error response 00:09:19.365 response: 00:09:19.365 { 00:09:19.365 "code": -32602, 00:09:19.365 "message": "Invalid cntlid range [6-5]" 00:09:19.365 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:19.365 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:19.623 { 00:09:19.623 "name": "foobar", 00:09:19.623 "method": "nvmf_delete_target", 00:09:19.623 "req_id": 1 00:09:19.623 } 00:09:19.623 Got JSON-RPC error response 00:09:19.623 response: 00:09:19.623 { 00:09:19.623 "code": -32602, 00:09:19.623 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:19.623 }' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:19.623 { 00:09:19.623 "name": "foobar", 00:09:19.623 "method": "nvmf_delete_target", 00:09:19.623 "req_id": 1 00:09:19.623 } 00:09:19.623 Got JSON-RPC error response 00:09:19.623 response: 00:09:19.623 { 00:09:19.623 "code": -32602, 00:09:19.623 "message": "The specified target doesn't exist, cannot delete it." 
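Once this last pattern matches, the script resets its trap and runs nvmftestfini, which produces the rmmod/kill lines that follow; in outline (assumption: _remove_spdk_ns, whose output is redirected to /dev/null here, is what deletes the namespace):

    modprobe -v -r nvme-tcp            # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess 3133447
    ip netns delete cvl_0_0_ns_spdk    # assumption: performed by _remove_spdk_ns
    ip -4 addr flush cvl_0_1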
00:09:19.623 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.623 rmmod nvme_tcp 00:09:19.623 rmmod nvme_fabrics 00:09:19.623 rmmod nvme_keyring 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3133447 ']' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3133447 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3133447 ']' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3133447 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3133447 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3133447' 00:09:19.623 killing process with pid 3133447 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3133447 00:09:19.623 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3133447 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.882 07:45:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.787 07:45:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.787 00:09:21.787 real 0m12.118s 00:09:21.787 user 0m19.698s 00:09:21.787 sys 0m5.294s 00:09:21.787 07:45:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.787 07:45:06 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:21.787 ************************************ 00:09:21.787 END TEST nvmf_invalid 00:09:21.787 ************************************ 00:09:22.047 07:45:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.047 07:45:06 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.047 07:45:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.047 07:45:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.047 07:45:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.047 ************************************ 00:09:22.047 START TEST nvmf_abort 00:09:22.047 ************************************ 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.047 * Looking for test storage... 00:09:22.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.047 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.048 07:45:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.048 07:45:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.665 
07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:28.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:28.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:28.665 Found net devices under 0000:86:00.0: cvl_0_0 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:28.665 Found net devices under 0000:86:00.1: cvl_0_1 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:28.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:28.665 00:09:28.665 --- 10.0.0.2 ping statistics --- 00:09:28.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.665 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:28.665 00:09:28.665 --- 10.0.0.1 ping statistics --- 00:09:28.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.665 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3138304 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3138304 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3138304 ']' 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.665 07:45:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.665 [2024-07-15 07:45:12.605795] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
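For reference, the nvmf_tcp_init sequence traced above builds a two-namespace TCP topology out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface, while cvl_0_1 stays in the root namespace as the initiator side. A minimal standalone sketch of the same setup, using the cvl_0_* names from this run (the preliminary ip -4 addr flush steps from the trace are omitted for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> initiator

Both pings answering in well under a millisecond, as above, confirms the back-to-back link is usable before the target application is started.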
00:09:28.666 [2024-07-15 07:45:12.605841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.666 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.666 [2024-07-15 07:45:12.677991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.666 [2024-07-15 07:45:12.756359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.666 [2024-07-15 07:45:12.756392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.666 [2024-07-15 07:45:12.756398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.666 [2024-07-15 07:45:12.756404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.666 [2024-07-15 07:45:12.756409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.666 [2024-07-15 07:45:12.756517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.666 [2024-07-15 07:45:12.756624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.666 [2024-07-15 07:45:12.756625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.666 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.666 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 [2024-07-15 07:45:13.459845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 Malloc0 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 Delay0 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
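The abort target bring-up traced here, together with the namespace and listener calls that follow just below, boils down to a short RPC sequence. As a sketch of what the script's rpc_cmd calls do, issued by hand against the target's default /var/tmp/spdk.sock (the $rpc shorthand is only for readability here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport, options as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MB malloc bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s read/write latency (values in us),
                                                                 # so in-flight I/O is slow enough to abort
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The delay bdev is the point of the test: with every I/O held for about a second, the abort example below always has outstanding requests to cancel.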
00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 [2024-07-15 07:45:13.537094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.925 07:45:13 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:28.925 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.925 [2024-07-15 07:45:13.657299] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:31.459 Initializing NVMe Controllers 00:09:31.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:31.460 controller IO queue size 128 less than required 00:09:31.460 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:31.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:31.460 Initialization complete. Launching workers. 
00:09:31.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42822 00:09:31.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42883, failed to submit 62 00:09:31.460 success 42826, unsuccess 57, failed 0 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.460 rmmod nvme_tcp 00:09:31.460 rmmod nvme_fabrics 00:09:31.460 rmmod nvme_keyring 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3138304 ']' 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3138304 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3138304 ']' 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3138304 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3138304 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3138304' 00:09:31.460 killing process with pid 3138304 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3138304 00:09:31.460 07:45:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3138304 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.460 07:45:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.999 07:45:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.999 00:09:33.999 real 0m11.568s 00:09:33.999 user 0m13.312s 00:09:33.999 sys 0m5.395s 00:09:33.999 07:45:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.999 07:45:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:33.999 ************************************ 00:09:33.999 END TEST nvmf_abort 00:09:33.999 ************************************ 00:09:33.999 07:45:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:33.999 07:45:18 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.999 07:45:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.999 07:45:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.999 07:45:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.999 ************************************ 00:09:33.999 START TEST nvmf_ns_hotplug_stress 00:09:33.999 ************************************ 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:33.999 * Looking for test storage... 00:09:33.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.999 07:45:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.999 07:45:18
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.999 07:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:39.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:39.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.276 07:45:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:39.276 Found net devices under 0000:86:00.0: cvl_0_0 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:39.276 Found net devices under 0000:86:00.1: cvl_0_1 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.276 07:45:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.276 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.277 07:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.277 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.277 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.277 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.277 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:09:39.535 00:09:39.535 --- 10.0.0.2 ping statistics --- 00:09:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.535 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:39.535 00:09:39.535 --- 10.0.0.1 ping statistics --- 00:09:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.535 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3142326 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3142326 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3142326 ']' 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.535 07:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.535 [2024-07-15 07:45:24.222920] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:09:39.535 [2024-07-15 07:45:24.222960] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.535 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.795 [2024-07-15 07:45:24.295311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.795 [2024-07-15 07:45:24.371394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.795 [2024-07-15 07:45:24.371431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.795 [2024-07-15 07:45:24.371438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.795 [2024-07-15 07:45:24.371444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.795 [2024-07-15 07:45:24.371449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.795 [2024-07-15 07:45:24.371576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.795 [2024-07-15 07:45:24.371680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.795 [2024-07-15 07:45:24.371681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.362 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.362 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:40.362 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.362 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:40.363 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.363 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.363 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:40.363 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:40.621 [2024-07-15 07:45:25.228686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.622 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:40.881 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.881 [2024-07-15 07:45:25.598077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.881 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.140 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:41.399 Malloc0 00:09:41.399 07:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.399 Delay0 00:09:41.658 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.658 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:41.918 NULL1 00:09:41.918 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:42.177 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:42.177 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3142813 00:09:42.177 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:42.177 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.177 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.177 07:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.436 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:42.436 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:42.695 true 00:09:42.695 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:42.695 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.954 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.954 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:42.954 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:43.213 true 00:09:43.213 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:43.213 07:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.472 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.731 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:43.731 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:43.731 true 00:09:43.731 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:43.731 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.990 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.248 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:44.248 07:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:44.248 true 00:09:44.507 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:44.507 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.507 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.766 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:44.766 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:45.025 true 00:09:45.025 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:45.025 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.284 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.284 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:45.284 07:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:45.543 true 00:09:45.543 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:45.543 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.801 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.060 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
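The iterations traced above and below all expand the same loop body: check that the perf workload is still alive, hot-remove namespace 1, re-add Delay0, then grow NULL1 to the next size (null_size starts at 1000 at ns_hotplug_stress.sh@25). Condensed into a sketch, assuming $PERF_PID is the backgrounded spdk_nvme_perf pid from earlier in the script and $rpc is again just shorthand:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                        # loop until the perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove nsid 1 under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                     # resize NULL1 under I/O; the traced
    done                                                             # "true" is its success output

The stress comes from the initiator side: perf keeps 128 reads in flight the whole time while namespaces disappear, reappear, and change size underneath it.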
00:09:46.060 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:46.060 true 00:09:46.060 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:46.060 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.319 07:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.578 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:46.578 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:46.836 true 00:09:46.836 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:46.836 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.836 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.093 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:47.093 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:47.351 true 00:09:47.351 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:47.351 07:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.610 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.610 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:47.610 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:47.869 true 00:09:47.869 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:47.869 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.127 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.396 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:48.396 07:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:09:48.396 true 00:09:48.680 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:48.680 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.680 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.937 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:48.937 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:49.196 true 00:09:49.196 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:49.196 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.454 07:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.454 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:49.454 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:49.713 true 00:09:49.713 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:49.713 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.971 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.230 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:50.230 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:50.230 true 00:09:50.230 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:50.230 07:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.489 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.748 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:50.748 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:51.007 true 00:09:51.007 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 
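
All of these records repeat lines 44 through 50 of test/nvmf/target/ns_hotplug_stress.sh, as the sh@NN trace markers show. A minimal bash reconstruction of that loop, inferred from the trace rather than copied from the script (the starting value of null_size and the perf_pid variable name are assumptions), is:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=3142813   # background I/O workload launched earlier in the test
    null_size=1000     # counter is already past 1000 when this section starts

    while kill -0 $perf_pid; do                                           # sh@44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        null_size=$((null_size + 1))                                      # sh@49
        $rpc_py bdev_null_resize NULL1 $null_size                         # sh@50
    done

So the loop runs for exactly as long as the workload process stays alive, hot-removing and re-adding one namespace while growing another bdev by 1 MiB per iteration.
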
00:09:51.007 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.007 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.266 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:51.266 07:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:51.525 true 00:09:51.525 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:51.525 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.785 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.785 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:51.785 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:52.043 true 00:09:52.043 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:52.043 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.302 07:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.560 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:52.560 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:52.560 true 00:09:52.819 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:52.819 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.819 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.078 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:53.078 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:53.335 true 00:09:53.335 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:53.335 07:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.594 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.594 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:53.594 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:53.852 true 00:09:53.852 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:53.852 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.111 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.370 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:54.370 07:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:54.370 true 00:09:54.630 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:54.630 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.630 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.888 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:54.888 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:55.147 true 00:09:55.147 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:55.147 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.406 07:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.406 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:55.406 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:55.664 true 00:09:55.664 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:55.664 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.923 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.182 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:56.182 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:56.182 true 00:09:56.182 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:56.182 07:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.440 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.697 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:56.697 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:56.957 true 00:09:56.957 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:56.957 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.957 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.217 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:57.217 07:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:57.475 true 00:09:57.475 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:57.475 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.734 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.734 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:57.734 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:57.993 true 00:09:57.993 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:57.993 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.253 07:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.513 07:45:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:58.513 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:58.513 true 00:09:58.771 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:58.771 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.771 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.029 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:59.029 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:59.287 true 00:09:59.287 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:59.287 07:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.545 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.545 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:59.545 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:59.804 true 00:09:59.804 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:09:59.804 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.063 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.322 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:00.322 07:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:00.322 true 00:10:00.581 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:00.581 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.581 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.840 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:00.840 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:01.099 true 00:10:01.099 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:01.099 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.358 07:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.358 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:01.358 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:01.617 true 00:10:01.617 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:01.617 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.876 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.136 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:02.136 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:02.136 true 00:10:02.136 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:02.136 07:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.395 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.677 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:02.677 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:02.677 true 00:10:02.936 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:02.936 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.936 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.196 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:03.196 07:45:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:03.455 true 00:10:03.455 
07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:03.455 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.715 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.715 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:03.715 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:03.974 true 00:10:03.974 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:03.974 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.233 07:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.497 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:04.497 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:04.497 true 00:10:04.497 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:04.497 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.825 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.084 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:05.084 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:05.084 true 00:10:05.343 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:05.343 07:45:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.343 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.602 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:05.602 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:05.860 true 00:10:05.860 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:05.861 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.119 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.119 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:06.119 07:45:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:06.376 true 00:10:06.376 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:06.376 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.633 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.891 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:06.891 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:06.891 true 00:10:06.891 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:06.891 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.150 07:45:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.409 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:07.409 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:07.668 true 00:10:07.668 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:07.668 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.668 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.928 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:07.928 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:08.187 true 00:10:08.187 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:08.187 07:45:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
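
The kill -0 3142813 record at the top of every iteration is a liveness probe, not a kill: signal 0 delivers nothing and only reports whether the PID still exists. In isolation, with a hypothetical pid variable:

    pid=3142813
    if kill -0 "$pid"; then
        echo "workload still running, keep hot-plugging"
    else
        # the failing probe is what prints the
        # "kill: (3142813) - No such process" message seen further down
        echo "workload exited, fall through to cleanup"
    fi

Once the probe finally fails, the script leaves the loop, reaps the workload with wait, and removes both namespaces before moving on to the parallel phase.
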
00:10:08.446 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.716 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:08.716 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:08.716 true 00:10:08.716 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:08.716 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.976 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.234 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:09.234 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:09.234 true 00:10:09.491 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:09.491 07:45:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.491 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.750 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:09.750 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:10.009 true 00:10:10.009 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:10.009 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.268 07:45:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.268 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:10.268 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:10.526 true 00:10:10.526 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:10.526 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.784 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.042 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:11.043 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:11.043 true 00:10:11.301 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:11.301 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.301 07:45:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.559 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:11.559 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:11.818 true 00:10:11.818 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:11.818 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.076 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.076 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:12.076 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:12.335 true 00:10:12.335 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813 00:10:12.335 07:45:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.593 Initializing NVMe Controllers 00:10:12.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:12.594 Controller IO queue size 128, less than required. 00:10:12.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:12.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:12.594 Initialization complete. Launching workers. 
00:10:12.594 ========================================================
00:10:12.594                                                                                Latency(us)
00:10:12.594 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:12.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27282.83      13.32    4691.55    2442.27    9344.90
00:10:12.594 ========================================================
00:10:12.594 Total                                                                    :   27282.83      13.32    4691.55    2442.27    9344.90
00:10:12.594
00:10:12.594 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:12.594 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:10:12.594 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:10:12.852 true
00:10:12.852 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3142813
00:10:12.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3142813) - No such process
00:10:12.852 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3142813
00:10:12.852 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:13.111 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:13.369 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:13.369 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:13.369 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:13.369 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.369 07:45:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:13.369 null0
00:10:13.369 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:13.369 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.369 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:13.627 null1
00:10:13.627 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:13.627 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.627 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:13.886 null2
00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:13.886 null3 00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.886 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:14.145 null4 00:10:14.146 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.146 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.146 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:14.404 null5 00:10:14.405 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.405 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.405 07:45:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:14.405 null6 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:14.664 null7 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:14.664 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
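
From nthreads=8 onward the test is in its concurrent phase: eight null bdevs are created, then eight background workers hot-add and hot-remove one namespace each against the same subsystem. A sketch reconstructed from the sh@14-18 and sh@58-66 markers (inferred from the trace, not the verbatim script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                      # sh@14: one worker per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do  # sh@16: ten add/remove rounds
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # sh@18
        done
    }

    nthreads=8                                    # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do          # sh@59
        $rpc_py bdev_null_create null$i 100 4096  # sh@60: 100 MiB, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do          # sh@62
        add_remove $((i + 1)) null$i &            # sh@63: NSIDs 1..8 in parallel
        pids+=($!)                                # sh@64
    done
    wait "${pids[@]}"                             # sh@66: the "wait 3148256 3148257 ..." record just below

The interleaved (( i < 10 )), nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns records that follow are those eight workers racing one another, which is the point of the stress test.
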
00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3148256 3148257 3148258 3148261 3148263 3148264 3148266 3148269 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.665 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:14.924 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.183 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.442 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.442 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.442 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.442 07:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.442 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.702 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.961 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.219 07:46:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.478 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.737 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.738 07:46:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.738 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.997 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.256 07:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.256 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.256 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.256 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.514 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.515 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.773 07:46:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.773 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.033 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.291 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.292 07:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.292 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.292 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.551 07:46:03 
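The churn above is ns_hotplug_stress.sh at work: lines 16-18 of the script bump a counter ten times, and on each pass attach namespaces 1 through 8 to nqn.2016-06.io.spdk:cnode1 (namespace N backed by bdev null(N-1), as the -n arguments show) and then detach them all. The adds and removes land in scrambled order because the script dispatches the RPCs concurrently. A minimal sketch of that loop shape, assuming a sequential rendering (the rpc_py shorthand and the in-loop ordering are illustrative, not the script verbatim); the final iterations and the teardown continue below:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
i=0
while true; do
  (( ++i ))                 # ns_hotplug_stress.sh@16
  (( i < 10 )) || break     # ns_hotplug_stress.sh@16
  for n in {1..8}; do       # attach null bdev as namespace n (sh@17)
    $rpc_py nvmf_subsystem_add_ns -n $n nqn.2016-06.io.spdk:cnode1 null$((n - 1))
  done
  for n in {1..8}; do       # detach it again (sh@18)
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $n
  done
done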
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:18.551 rmmod nvme_tcp
00:10:18.551 rmmod nvme_fabrics
00:10:18.551 rmmod nvme_keyring
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3142326 ']'
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3142326
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3142326 ']'
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3142326
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3142326
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3142326'
00:10:18.551 killing process with pid 3142326
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3142326
00:10:18.551 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3142326
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:18.809 07:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:21.343 07:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:21.343
00:10:21.343 real 0m47.265s
00:10:21.343 user 3m18.829s
00:10:21.343 sys 0m17.013s
00:10:21.343 07:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:21.343 07:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:21.343 ************************************
00:10:21.343 END TEST nvmf_ns_hotplug_stress
00:10:21.343 ************************************
00:10:21.343 07:46:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:21.343 07:46:05 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:21.343 07:46:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:21.343 07:46:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:21.343 07:46:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:21.343 ************************************
00:10:21.343 START TEST nvmf_connect_stress
00:10:21.343 ************************************
00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:21.343 * Looking for test storage...
00:10:21.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.343 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:21.344 07:46:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:26.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:26.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:26.673 Found net devices under 0000:86:00.0: cvl_0_0 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.673 07:46:11 
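The device probing traced above and just below comes from nvmf/common.sh: it collects the PCI device IDs of supported NICs (e810, x722, mlx), then resolves each surviving PCI address to its kernel interface by globbing sysfs and keeps only ports whose link state is up, which is how the two "Found net devices" lines are produced. A rough sketch of the resolution step, using one address from this run (variable names mirror the trace; treat the snippet as illustrative rather than the library verbatim):

pci=0000:86:00.0                                   # one entry from the e810 list
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # nvmf/common.sh@383: glob the sysfs net dir
pci_net_devs=("${pci_net_devs[@]##*/}")            # nvmf/common.sh@399: strip paths, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # here: cvl_0_0
net_devs+=("${pci_net_devs[@]}")                   # nvmf/common.sh@401: accumulate usable ports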
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:26.673 Found net devices under 0000:86:00.1: cvl_0_1 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.673 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.674 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:10:26.933 00:10:26.933 --- 10.0.0.2 ping statistics --- 00:10:26.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.933 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:10:26.933 00:10:26.933 --- 10.0.0.1 ping statistics --- 00:10:26.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.933 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3152628 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3152628 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3152628 ']' 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.933 07:46:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.933 [2024-07-15 07:46:11.526211] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
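
A note on the topology built above: nvmf_tcp_init splits the two E810 ports across a network namespace so the target and the initiator run on the same host but still talk over real NICs. A minimal sketch of that sequence, using the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addresses taken from this run; other rigs will enumerate the ports differently:

# target port moves into its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
ping -c 1 10.0.0.2                                                 # sanity-check both directions,
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # as the ping output here shows
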
00:10:26.933 [2024-07-15 07:46:11.526256] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.933 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.933 [2024-07-15 07:46:11.596651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.933 [2024-07-15 07:46:11.675016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.933 [2024-07-15 07:46:11.675053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.933 [2024-07-15 07:46:11.675059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.933 [2024-07-15 07:46:11.675065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.933 [2024-07-15 07:46:11.675070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.933 [2024-07-15 07:46:11.675191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.933 [2024-07-15 07:46:11.675296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.933 [2024-07-15 07:46:11.675297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.870 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.871 [2024-07-15 07:46:12.391728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.871 [2024-07-15 07:46:12.415939] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.871 NULL1 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3152868 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.871 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.130 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.130 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:28.130 07:46:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.130 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.130 07:46:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:28.695 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.695 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.952 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.952 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 3152868 00:10:28.952 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.952 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.952 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.209 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.209 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:29.209 07:46:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.209 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.209 07:46:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.466 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.466 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:29.466 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.466 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.466 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.724 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:29.724 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.724 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.724 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.291 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.291 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:30.291 07:46:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.291 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.291 07:46:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.549 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.549 07:46:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:30.549 07:46:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.549 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.549 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.807 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.807 07:46:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:30.807 07:46:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.807 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.807 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.066 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.066 07:46:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:31.066 07:46:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.066 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.066 07:46:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.324 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.324 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:31.324 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.324 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.324 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.890 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.890 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:31.890 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.890 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.890 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.148 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.148 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:32.148 07:46:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.148 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.148 07:46:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.406 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.407 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:32.407 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.407 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.407 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.665 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.665 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:32.665 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.665 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.665 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.231 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.231 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:33.231 07:46:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.231 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.231 07:46:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.489 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.489 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:33.489 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:33.489 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.489 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.747 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.747 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:33.747 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.747 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.747 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.005 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.005 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:34.005 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.005 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.005 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.262 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.263 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:34.263 07:46:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.263 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.263 07:46:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.828 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.828 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:34.828 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.829 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.829 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.087 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:35.087 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.087 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.087 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.346 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.346 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:35.346 07:46:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.346 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.346 07:46:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.605 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.605 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:35.605 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.605 07:46:20 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.605 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.173 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.173 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:36.173 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.173 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.173 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.432 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:36.432 07:46:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.432 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.432 07:46:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.691 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.691 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:36.691 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.691 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.691 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.950 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.950 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:36.950 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.950 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.950 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.209 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.209 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:37.209 07:46:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.209 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.209 07:46:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.776 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.776 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:37.776 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.776 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.776 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.035 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3152868 00:10:38.035 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3152868) - No such process 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3152868 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.035 rmmod nvme_tcp 00:10:38.035 rmmod nvme_fabrics 00:10:38.035 rmmod nvme_keyring 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3152628 ']' 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3152628 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3152628 ']' 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3152628 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3152628 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3152628' 00:10:38.035 killing process with pid 3152628 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3152628 00:10:38.035 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3152628 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.294 07:46:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.198 07:46:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.457 00:10:40.457 real 0m19.372s 00:10:40.457 user 0m41.245s 00:10:40.457 sys 0m8.211s 00:10:40.457 07:46:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.457 07:46:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 ************************************ 00:10:40.457 END TEST nvmf_connect_stress 00:10:40.457 ************************************ 00:10:40.457 07:46:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.457 07:46:24 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:40.457 07:46:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.457 07:46:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.457 07:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.457 ************************************ 00:10:40.457 START TEST nvmf_fused_ordering 00:10:40.457 ************************************ 00:10:40.457 07:46:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:40.457 * Looking for test storage... 00:10:40.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.458 07:46:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.458 07:46:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.090 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.091 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.091 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.091 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.091 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:10:47.091 00:10:47.091 --- 10.0.0.2 ping statistics --- 00:10:47.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.091 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:10:47.091 00:10:47.091 --- 10.0.0.1 ping statistics --- 00:10:47.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.091 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3158024 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3158024 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3158024 ']' 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.091 07:46:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.091 [2024-07-15 07:46:30.997076] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:10:47.091 [2024-07-15 07:46:30.997120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.091 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.091 [2024-07-15 07:46:31.067235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.091 [2024-07-15 07:46:31.144329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.091 [2024-07-15 07:46:31.144367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.091 [2024-07-15 07:46:31.144374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.091 [2024-07-15 07:46:31.144380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.092 [2024-07-15 07:46:31.144385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
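
For reference, the nvmfappstart/waitforlisten pair above boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch, assuming the stock SPDK rpc.py and the default /var/tmp/spdk.sock socket; the retry loop is illustrative, not the exact autotest helper:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for i in $(seq 1 100); do
    # waitforlisten: the target is up once the RPC socket responds
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out early if the target died
    sleep 0.5
done
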
00:10:47.092 [2024-07-15 07:46:31.144405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.092 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.092 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:47.092 07:46:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.092 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:47.092 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 [2024-07-15 07:46:31.847087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 [2024-07-15 07:46:31.867264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 NULL1 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.352 07:46:31 
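Note: rpc_cmd above is the harness's in-shell RPC client; the same target configuration can be driven by hand with SPDK's scripts/rpc.py. A minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket and the spdk checkout as the working directory:
  # TCP transport; -o and -u 8192 mirror the NVMF_TRANSPORT_OPTS built up above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # Subsystem cnode1: -a allows any host, -s sets the serial, -m caps namespaces at 10
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Listen for NVMe/TCP connections on the target-side address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MiB null bdev with 512-byte blocks -> the "size: 1GB" namespace reported below
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # Attach NULL1 as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1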
00:10:47.352 07:46:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' [2024-07-15 07:46:31.922723] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... [2024-07-15 07:46:31.922766] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158243 ] EAL: No free 2048 kB hugepages reported on node 1
00:10:47.612 Attached to nqn.2016-06.io.spdk:cnode1
00:10:47.612 Namespace ID: 1 size: 1GB
00:10:47.612 fused_ordering(0) ... 00:10:48.960 fused_ordering(1023) [1,024 fused_ordering(N) progress lines for N = 0 through 1023, printed between 00:10:47.612 and 00:10:48.960; the counter ran with no gaps or reordering, so the repetitive output is condensed here]
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:48.960 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:48.960 rmmod nvme_tcp
00:10:49.219 rmmod nvme_fabrics
00:10:49.219 rmmod nvme_keyring
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3158024 ']'
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3158024
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3158024 ']'
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3158024
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3158024
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3158024' killing process with pid 3158024
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3158024
00:10:49.219 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3158024
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:49.479 07:46:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.386 07:46:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:51.386
00:10:51.386 real 0m11.010s
00:10:51.386 user 0m5.456s
00:10:51.386 sys 0m5.799s
00:10:51.386 07:46:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:51.387 07:46:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:51.387 ************************************
00:10:51.387 END TEST nvmf_fused_ordering ************************************
00:10:51.387 07:46:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
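Note: the teardown above is the nvmftestfini trap doing its work. A rough manual equivalent for this topology is sketched below, reusing this run's interface names and pid (3158024); nvmf/common.sh remains the authoritative sequence:
  # Unload the host-side NVMe/TCP modules (dependent modules are removed verbosely)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the target; 'wait' only applies if nvmf_tgt is a child of this shell
  kill 3158024 && wait 3158024
  # Drop the target's network namespace and flush the initiator-side address
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1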
00:10:51.387 07:46:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:51.387 07:46:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:51.387 07:46:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:51.387 07:46:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:51.387 ************************************
00:10:51.387 START TEST nvmf_delete_subsystem ************************************
00:10:51.387 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:51.646 * Looking for test storage...
00:10:51.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable
00:10:51.646 07:46:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=()
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:58.228 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:58.229 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:58.229 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:58.229 Found net devices under 0000:86:00.0: cvl_0_0
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:58.229 Found net devices under 0000:86:00.1: cvl_0_1
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
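Note: the enumeration above is gather_supported_nvmf_pci_devs matching PCI vendor:device IDs against the supported e810/x722/mlx tables, then resolving each hit to its kernel net interface through sysfs. A hand-run sketch of the same lookup for the two 0x8086:0x159b (E810) functions found here:
  # List E810 NICs by vendor:device ID, with domain-qualified PCI addresses
  lspci -D -d 8086:159b
  # Resolve the net interface behind each PCI function via sysfs
  ls /sys/bus/pci/devices/0000:86:00.0/net    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:86:00.1/net    # -> cvl_0_1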
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:10:58.229 00:10:58.229 --- 10.0.0.2 ping statistics --- 00:10:58.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.229 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:58.229 00:10:58.229 --- 10.0.0.1 ping statistics --- 00:10:58.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.229 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3162020 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3162020 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3162020 ']' 00:10:58.229 07:46:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.229 07:46:42 
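The commands traced above build the two-machines-on-one-box topology these tests rely on: one E810 port is moved into a private network namespace and addressed as the target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic actually crosses the physical link. Condensed from the trace (interface names cvl_0_0/cvl_0_1 are the ones the ice driver exposed here):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator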
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 [2024-07-15 07:46:42.047859] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:10:58.229 [2024-07-15 07:46:42.047906] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.229 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.229 [2024-07-15 07:46:42.121162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.229 [2024-07-15 07:46:42.199880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.229 [2024-07-15 07:46:42.199915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.229 [2024-07-15 07:46:42.199921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.229 [2024-07-15 07:46:42.199928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.229 [2024-07-15 07:46:42.199933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
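Because the target is launched with -e 0xFFFF, every tracepoint group is recorded, and the notices above spell out how to retrieve the data. Following the log's own hint (the redirection target is an illustrative choice):

  spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot of events from the running app
  cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the raw shm file for offline analysis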
00:10:58.229 [2024-07-15 07:46:42.199991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.229 [2024-07-15 07:46:42.199990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.229 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 [2024-07-15 07:46:42.903082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 [2024-07-15 07:46:42.923216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 NULL1 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.230 Delay0 00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.230 07:46:42 
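Unwrapped from the xtrace framing, the RPC sequence above builds the whole target in a handful of calls: a TCP transport, a subsystem that allows any host (-a) and at most 10 namespaces, a TCP listener on 10.0.0.2:4420, a 1000 MB null backing bdev with 512-byte blocks, and a delay bdev over it configured for 1,000,000 us (1 s) average and p99 latency on both reads and writes, which guarantees plenty of I/O will still be in flight when the subsystem is deleted. The same sequence as plain scripts/rpc.py calls against the default /var/tmp/spdk.sock (the add_ns step follows just below in the trace):

  rpc=./scripts/rpc.py   # rpc_cmd in the trace is a thin wrapper around this
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                 # 1000 MB backing device, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0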
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3162084
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:10:58.230 07:46:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:10:58.489 EAL: No free 2048 kB hugepages reported on node 1
00:10:58.489 [2024-07-15 07:46:43.014078] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:00.391 07:46:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:00.391 07:46:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:00.391 07:46:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:00.648 [hundreds of interleaved per-I/O completions elided: 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6', logged while the deleted subsystem aborted the outstanding queue-depth-128 workload; the qpair state errors below were interspersed among them]
00:11:00.649 [2024-07-15 07:46:45.167949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a5c0 is same with the state(5) to be set
00:11:01.586 [2024-07-15 07:46:46.148960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175bac0 is same with the state(5) to be set
00:11:01.586 [2024-07-15 07:46:46.171315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a3e0 is same with the state(5) to be set
00:11:01.586 [2024-07-15 07:46:46.171548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a7a0 is same with the state(5) to be set
00:11:01.586 [2024-07-15 07:46:46.174498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd20400d760 is same with the state(5) to be set
00:11:01.587 [2024-07-15 07:46:46.175129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd20400cfe0 is same with the state(5) to be set
00:11:01.587 Initializing NVMe Controllers
00:11:01.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:01.587 Controller IO queue size 128, less than required.
00:11:01.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:01.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:01.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:01.587 Initialization complete. Launching workers.
00:11:01.587 ========================================================
00:11:01.587 Latency(us)
00:11:01.587 Device Information :                     IOPS   MiB/s      Average        min          max
00:11:01.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.41    0.08    913419.09     255.09   1005613.79
00:11:01.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 184.83    0.09    914751.28     377.75   2001420.73
00:11:01.587 ========================================================
00:11:01.587 Total :                                  346.24    0.17    914130.23     255.09   2001420.73
00:11:01.587
00:11:01.587 [2024-07-15 07:46:46.175574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175bac0 (9): Bad file descriptor
00:11:01.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:01.587 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:01.587 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:01.587 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3162084
00:11:01.587 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3162084
00:11:02.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3162084) - No such process
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3162084
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3162084
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3162084
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:02.166 07:46:46 nvmf_tcp.nvmf_delete_subsystem --
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.167 [2024-07-15 07:46:46.704544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3162739 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162739 00:11:02.167 07:46:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.167 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.167 [2024-07-15 07:46:46.785175] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
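Both passes of the test drive the target with spdk_nvme_perf and then watch the process with the kill -0 polling seen in the trace. The first pass (pid 3162084) is the interesting one: the subsystem is deleted two seconds into a five-second run, so perf's outstanding I/O completes with errors (the sct=0/sc=8 completions summarized earlier) and the NOT wait assertion confirms perf exited non-zero. A sketch of that shape, using the suite's rpc_cmd and NOT helpers (paths abbreviated; the loop is the pattern from delete_subsystem.sh, not a verbatim copy):

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1    # give perf ~15 s to notice and die
      sleep 0.5
  done
  NOT wait "$perf_pid"                # test helper: assert perf exited with an error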
00:11:02.734 07:46:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:02.734 07:46:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162739
00:11:02.734 07:46:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[four further identical poll iterations, timestamped 00:11:02.994 through 00:11:04.695, elided]
00:11:05.262 07:46:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:05.262 07:46:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162739
00:11:05.262 07:46:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:05.262 Initializing NVMe Controllers
00:11:05.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:05.262 Controller IO queue size 128, less than required.
00:11:05.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:05.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:05.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:05.262 Initialization complete. Launching workers.
00:11:05.262 ========================================================
00:11:05.262 Latency(us)
00:11:05.262 Device Information :                     IOPS   MiB/s      Average          min          max
00:11:05.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06   1002198.81   1000130.04   1041178.47
00:11:05.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06   1003867.58   1000205.37   1010267.37
00:11:05.262 ========================================================
00:11:05.262 Total :                                  256.00    0.12   1003033.19   1000130.04   1041178.47
00:11:05.262
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3162739
00:11:05.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3162739) - No such process
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3162739
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:05.522 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:05.522 rmmod nvme_tcp
00:11:05.781 rmmod nvme_fabrics
00:11:05.781 rmmod nvme_keyring
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3162020 ']'
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3162020
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3162020 ']'
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3162020
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3162020
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3162020'
00:11:05.781 killing process with pid 3162020
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3162020
00:11:05.781 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
3162020 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.040 07:46:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.944 07:46:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.944 00:11:07.944 real 0m16.504s 00:11:07.944 user 0m30.493s 00:11:07.944 sys 0m5.256s 00:11:07.944 07:46:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.944 07:46:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.944 ************************************ 00:11:07.944 END TEST nvmf_delete_subsystem 00:11:07.944 ************************************ 00:11:07.944 07:46:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.944 07:46:52 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:07.944 07:46:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.944 07:46:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.944 07:46:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.944 ************************************ 00:11:07.944 START TEST nvmf_ns_masking 00:11:07.944 ************************************ 00:11:07.944 07:46:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:08.203 * Looking for test storage... 
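The fini sequence just logged unwinds the setup in reverse: the EXIT trap fires, nvmfcleanup unloads the kernel NVMe modules, killprocess terminates the target (pid 3162020), and remove_spdk_ns plus the address flush return the NICs to a clean state before run_test prints the 16.5 s accounting and moves on to ns_masking. The unload-and-flush tail, condensed (the ip netns del line is an assumption about what remove_spdk_ns does; its body is hidden behind xtrace_disable in this log):

  modprobe -v -r nvme-tcp        # drags out nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  ip netns del cvl_0_0_ns_spdk   # assumed to happen inside remove_spdk_ns (not visible in the trace)
  ip -4 addr flush cvl_0_1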
00:11:08.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain entries repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain entries repeated]:/usr/local/bin:[same system entries as above]:/var/lib/snapd/snap/bin
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain entries repeated]:/usr/local/bin:[same system entries as above]:/var/lib/snapd/snap/bin
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same toolchain entries repeated]:/usr/local/bin:[same system entries as above]:/var/lib/snapd/snap/bin
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d02deb47-97fa-4452-a045-a692cb7fc012
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2470a4e4-de1d-47b8-b4ed-3b6abf3c7779
00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d80c1f18-e58f-41b8-a65c-91df61974a48 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.203 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.204 07:46:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
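Before any target comes up, ns_masking.sh fixes its identifiers: two freshly generated namespace UUIDs, the subsystem NQN, two host NQNs and a host ID, which the rest of the test uses to connect as distinct hosts and check which namespaces each one is allowed to see. Extracted from the trace (the UUID values are of course random per run):

  ns1uuid=$(uuidgen)    # d02deb47-97fa-4452-a045-a692cb7fc012 in this run
  ns2uuid=$(uuidgen)    # 2470a4e4-de1d-47b8-b4ed-3b6abf3c7779 in this run
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN1=nqn.2016-06.io.spdk:host1
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  HOSTID=$(uuidgen)     # d80c1f18-e58f-41b8-a65c-91df61974a48 in this run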
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:13.594 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:13.594 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.594 
07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:13.594 Found net devices under 0000:86:00.0: cvl_0_0 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:13.594 Found net devices under 0000:86:00.1: cvl_0_1 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.594 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:11:13.854 00:11:13.854 --- 10.0.0.2 ping statistics --- 00:11:13.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.854 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:13.854 00:11:13.854 --- 10.0.0.1 ping statistics --- 00:11:13.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.854 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.854 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3166947 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3166947 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3166947 ']' 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.116 07:46:58 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.116 07:46:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:14.116 [2024-07-15 07:46:58.663856] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:11:14.116 [2024-07-15 07:46:58.663899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.116 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.116 [2024-07-15 07:46:58.735101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.116 [2024-07-15 07:46:58.806281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.116 [2024-07-15 07:46:58.806319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.116 [2024-07-15 07:46:58.806326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.116 [2024-07-15 07:46:58.806332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.116 [2024-07-15 07:46:58.806337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.116 [2024-07-15 07:46:58.806355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:15.054 [2024-07-15 07:46:59.668614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:15.054 07:46:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:15.314 Malloc1 00:11:15.314 07:46:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:15.573 Malloc2 00:11:15.573 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
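For orientation: the network bring-up traced above pins the target to one port of the E810 pair inside a private network namespace and leaves the initiator port in the default namespace, so NVMe/TCP traffic loops through real hardware between the two ports. A condensed sketch of that topology, using the interface names and addresses from this run:

    # target side: cvl_0_0 moves into its own netns and takes 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # allow NVMe/TCP (port 4420) in, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2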
00:11:15.573 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:15.832 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.091 [2024-07-15 07:47:00.614112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d80c1f18-e58f-41b8-a65c-91df61974a48 -a 10.0.0.2 -s 4420 -i 4 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.091 07:47:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:18.628 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:18.629 [ 0]:0x1 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2e79f4d859a4bf7a7eecd7a8dba1e2a 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2e79f4d859a4bf7a7eecd7a8dba1e2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.629 07:47:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
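The ns_is_visible checks that follow all reduce to the same two nvme-cli probes; reconstructed from the trace (the helper name and NSID argument are from ns_masking.sh, the rest is a sketch):

    # ns_is_visible 0x1: NSID 1 must appear in list-ns and carry a non-zero NGUID
    nvme list-ns /dev/nvme0 | grep 0x1          # prints "[ 0]:0x1" when attached
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]  # all-zero NGUID == masked

A masked namespace fails both probes, which is why the negative cases below are wrapped in NOT and expect the all-zero NGUID.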
00:11:18.629 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:18.629 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:18.629 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:18.629 [ 0]:0x1 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2e79f4d859a4bf7a7eecd7a8dba1e2a 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2e79f4d859a4bf7a7eecd7a8dba1e2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:18.630 [ 1]:0x2 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:18.630 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.892 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.151 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:19.151 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:19.151 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d80c1f18-e58f-41b8-a65c-91df61974a48 -a 10.0.0.2 -s 4420 -i 4 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:19.410 07:47:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.315 07:47:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.316 07:47:05 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:21.316 07:47:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.316 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.575 [ 0]:0x2 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.575 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.834 [ 0]:0x1 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2e79f4d859a4bf7a7eecd7a8dba1e2a 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2e79f4d859a4bf7a7eecd7a8dba1e2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.834 [ 1]:0x2 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.834 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:22.093 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:22.094 [ 0]:0x2 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.094 07:47:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:22.352 07:47:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:22.352 07:47:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d80c1f18-e58f-41b8-a65c-91df61974a48 -a 10.0.0.2 -s 4420 -i 4 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:22.612 07:47:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:24.516 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.775 [ 0]:0x1 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f2e79f4d859a4bf7a7eecd7a8dba1e2a 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f2e79f4d859a4bf7a7eecd7a8dba1e2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:24.775 [ 1]:0x2 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.775 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.035 [ 0]:0x2 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:25.035 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.294 [2024-07-15 07:47:09.924549] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:25.294 request: 00:11:25.294 { 00:11:25.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.294 "nsid": 2, 00:11:25.294 "host": "nqn.2016-06.io.spdk:host1", 00:11:25.294 "method": "nvmf_ns_remove_host", 00:11:25.294 "req_id": 1 00:11:25.294 } 00:11:25.294 Got JSON-RPC error response 00:11:25.294 response: 00:11:25.294 { 00:11:25.294 "code": -32602, 00:11:25.294 "message": "Invalid parameters" 00:11:25.294 } 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:25.294 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.295 07:47:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:25.295 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.295 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.295 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.295 07:47:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.295 [ 0]:0x2 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.295 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=012941d50f6b41ebba0a74ae62ab11a5 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
012941d50f6b41ebba0a74ae62ab11a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3168961 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3168961 /var/tmp/host.sock 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3168961 ']' 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:25.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.554 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.554 [2024-07-15 07:47:10.153183] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
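From here the test switches to a second SPDK application (spdk_tgt listening on /var/tmp/host.sock) acting as the initiator, so per-host visibility can be verified against the SPDK bdev layer instead of the kernel. The verification below boils down to roughly this, with rpc.py abbreviating the full /var/jenkins/.../spdk/scripts/rpc.py path shown in the trace:

    # attach once per host NQN, then read back the bdev UUIDs and compare them
    # against the NGUIDs assigned to each masked namespace
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'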
00:11:25.554 [2024-07-15 07:47:10.153233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168961 ] 00:11:25.554 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.554 [2024-07-15 07:47:10.219727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.554 [2024-07-15 07:47:10.292067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.490 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.490 07:47:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:26.490 07:47:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.490 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.748 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d02deb47-97fa-4452-a045-a692cb7fc012 00:11:26.748 07:47:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:26.748 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D02DEB4797FA4452A045A692CB7FC012 -i 00:11:27.006 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2470a4e4-de1d-47b8-b4ed-3b6abf3c7779 00:11:27.006 07:47:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:27.006 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2470A4E4DE1D47B8B4ED3B6ABF3C7779 -i 00:11:27.006 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.299 07:47:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:27.557 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:27.557 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:27.814 nvme0n1 00:11:27.814 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:27.814 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:28.380 nvme1n2 00:11:28.380 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:28.380 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:28.380 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:28.380 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:28.380 07:47:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:28.380 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:28.380 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:28.380 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:28.380 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:28.637 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d02deb47-97fa-4452-a045-a692cb7fc012 == \d\0\2\d\e\b\4\7\-\9\7\f\a\-\4\4\5\2\-\a\0\4\5\-\a\6\9\2\c\b\7\f\c\0\1\2 ]] 00:11:28.637 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:28.637 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:28.637 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2470a4e4-de1d-47b8-b4ed-3b6abf3c7779 == \2\4\7\0\a\4\e\4\-\d\e\1\d\-\4\7\b\8\-\b\4\e\d\-\3\b\6\a\b\f\3\c\7\7\7\9 ]] 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3168961 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3168961 ']' 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3168961 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:28.895 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3168961 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3168961' 00:11:28.896 killing process with pid 3168961 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3168961 00:11:28.896 07:47:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3168961 00:11:29.154 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.412 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:29.413 07:47:13 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.413 07:47:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.413 rmmod nvme_tcp 00:11:29.413 rmmod nvme_fabrics 00:11:29.413 rmmod nvme_keyring 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3166947 ']' 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3166947 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3166947 ']' 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3166947 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3166947 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3166947' 00:11:29.413 killing process with pid 3166947 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3166947 00:11:29.413 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3166947 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.674 07:47:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.646 07:47:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.646 00:11:31.646 real 0m23.680s 00:11:31.646 user 0m25.537s 00:11:31.646 sys 0m6.565s 00:11:31.646 07:47:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.646 07:47:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.646 ************************************ 00:11:31.646 END TEST nvmf_ns_masking 00:11:31.646 ************************************ 00:11:31.906 07:47:16 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:31.906 07:47:16 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:31.906 07:47:16 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:31.906 07:47:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:31.906 07:47:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.906 07:47:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:31.906 ************************************ 00:11:31.906 START TEST nvmf_nvme_cli 00:11:31.906 ************************************ 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:31.906 * Looking for test storage... 00:11:31.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.906 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.907 07:47:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:38.480 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:38.480 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:38.480 Found net devices under 0000:86:00.0: cvl_0_0 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:38.480 Found net devices under 0000:86:00.1: cvl_0_1 00:11:38.480 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.481 07:47:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:11:38.481 00:11:38.481 --- 10.0.0.2 ping statistics --- 00:11:38.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.481 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:11:38.481 00:11:38.481 --- 10.0.0.1 ping statistics --- 00:11:38.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.481 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3173122 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3173122 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3173122 ']' 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.481 07:47:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.481 [2024-07-15 07:47:22.402795] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
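The nvmf_tcp_init steps above, condensed into a sketch (interface names and addresses as in the trace): the first ice port moves into a private namespace and serves as the target at 10.0.0.2, the second stays in the root namespace as the initiator at 10.0.0.1, both directions are ping-verified, and the target app is then launched inside the namespace. The backgrounding of nvmf_tgt is an assumption here; the trace captures its pid as nvmfpid.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &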
00:11:38.481 [2024-07-15 07:47:22.402842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.481 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.481 [2024-07-15 07:47:22.476019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.481 [2024-07-15 07:47:22.552385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.481 [2024-07-15 07:47:22.552425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.481 [2024-07-15 07:47:22.552432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.481 [2024-07-15 07:47:22.552439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.481 [2024-07-15 07:47:22.552444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.481 [2024-07-15 07:47:22.552504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.481 [2024-07-15 07:47:22.552609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.481 [2024-07-15 07:47:22.552715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.481 [2024-07-15 07:47:22.552716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.481 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.481 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:38.481 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.481 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.481 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 [2024-07-15 07:47:23.251203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 Malloc0 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 Malloc1 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.738 07:47:23 
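Collected from the rpc_cmd traces here and in the lines that follow, the whole target configuration as one plain rpc.py sequence (a sketch: the trace's rpc_cmd wrapper adds retry handling that is omitted, and the RPC socket is assumed to be the default /var/tmp/spdk.sock named earlier):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # flags copied verbatim from the trace
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420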
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.738 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 [2024-07-15 07:47:23.332818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:38.739 00:11:38.739 Discovery Log Number of Records 2, Generation counter 2 00:11:38.739 =====Discovery Log Entry 0====== 00:11:38.739 trtype: tcp 00:11:38.739 adrfam: ipv4 00:11:38.739 subtype: current discovery subsystem 00:11:38.739 treq: not required 00:11:38.739 portid: 0 00:11:38.739 trsvcid: 4420 00:11:38.739 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:38.739 traddr: 10.0.0.2 00:11:38.739 eflags: explicit discovery connections, duplicate discovery information 00:11:38.739 sectype: none 00:11:38.739 =====Discovery Log Entry 1====== 00:11:38.739 trtype: tcp 00:11:38.739 adrfam: ipv4 00:11:38.739 subtype: nvme subsystem 00:11:38.739 treq: not required 00:11:38.739 portid: 0 00:11:38.739 trsvcid: 4420 00:11:38.739 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:38.739 traddr: 10.0.0.2 00:11:38.739 eflags: none 00:11:38.739 sectype: none 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- 
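The discovery log above carries two records: entry 0 is the discovery subsystem itself, entry 1 the I/O subsystem nqn.2016-06.io.spdk:cnode1 on port 4420. A sketch of issuing the same query by hand, reusing the host identity from the trace:
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562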
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:38.739 07:47:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:40.115 07:47:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:42.021 07:47:26 
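The connect-and-wait step above in miniature: attach to the subsystem, then poll lsblk until both malloc-backed namespaces surface with the target serial. A sketch without the 15-iteration cap waitforserial applies in the trace:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do
    sleep 2   # same probe the trace runs: lsblk -l -o NAME,SERIAL | grep -c ...
done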
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:42.021 /dev/nvme0n1 ]] 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.021 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:42.022 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- 
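Teardown as traced above and below, reduced to a sketch (same rpc.py path as earlier; the polling loop is an assumption standing in for waitforserial_disconnect, which checks both the default and list-form lsblk output):
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1   # wait until no block device still reports the serial
done
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1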
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.281 rmmod nvme_tcp 00:11:42.281 rmmod nvme_fabrics 00:11:42.281 rmmod nvme_keyring 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3173122 ']' 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3173122 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3173122 ']' 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3173122 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.281 07:47:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3173122 00:11:42.281 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.281 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.281 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3173122' 00:11:42.281 killing process with pid 3173122 00:11:42.281 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3173122 00:11:42.281 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3173122 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.541 07:47:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.077 07:47:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.077 00:11:45.077 real 0m12.862s 00:11:45.077 user 0m20.210s 00:11:45.077 sys 0m4.989s 00:11:45.077 07:47:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.077 07:47:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:45.077 ************************************ 00:11:45.077 END TEST nvmf_nvme_cli 00:11:45.077 ************************************ 00:11:45.077 07:47:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:45.077 07:47:29 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:45.077 07:47:29 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:45.077 07:47:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.077 07:47:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.077 07:47:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.077 ************************************ 00:11:45.077 START TEST nvmf_vfio_user 00:11:45.077 ************************************ 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:45.077 * Looking for test storage... 00:11:45.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:45.077 
07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3174488 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3174488' 00:11:45.077 Process pid: 3174488 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3174488 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3174488 ']' 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.077 07:47:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:45.077 [2024-07-15 07:47:29.547663] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:11:45.077 [2024-07-15 07:47:29.547713] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.077 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.077 [2024-07-15 07:47:29.596709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.078 [2024-07-15 07:47:29.669442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.078 [2024-07-15 07:47:29.669482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.078 [2024-07-15 07:47:29.669489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.078 [2024-07-15 07:47:29.669494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.078 [2024-07-15 07:47:29.669499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
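Both targets in this run pin SPDK reactors to cores 0-3: nvme_cli.sh passed a hex cpumask while nvmf_vfio_user.sh above passes an explicit core list. The two spellings are equivalent for nvmf_tgt (a sketch with the binary path shortened):
nvmf_tgt -i 0 -e 0xFFFF -m 0xF           # hex cpumask, bits 0-3 set
nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'   # explicit core list, same cores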
00:11:45.078 [2024-07-15 07:47:29.669555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.078 [2024-07-15 07:47:29.669751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.078 [2024-07-15 07:47:29.669666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.078 [2024-07-15 07:47:29.669753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.646 07:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.646 07:47:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:45.646 07:47:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:47.023 Malloc1 00:11:47.023 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:47.282 07:47:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:47.541 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:47.800 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:47.800 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:47.800 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:47.800 Malloc2 00:11:47.800 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:48.060 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:48.319 07:47:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:48.319 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:48.319 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:48.319 07:47:33 
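The per-device setup loop traced below, condensed: each vfio-user controller gets a socket directory, a 64 MiB malloc namespace, and a VFIOUSER listener whose address is a filesystem path rather than an IP. A sketch mirroring the trace for NUM_DEVICES=2:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done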
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:48.319 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:48.319 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:48.319 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:48.581 [2024-07-15 07:47:33.076678] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:11:48.581 [2024-07-15 07:47:33.076725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174981 ] 00:11:48.581 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.581 [2024-07-15 07:47:33.108132] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:48.581 [2024-07-15 07:47:33.116601] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:48.581 [2024-07-15 07:47:33.116621] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffb0183a000 00:11:48.581 [2024-07-15 07:47:33.117602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.118605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.119608] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.120613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.121619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.122614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.123626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.124626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.581 [2024-07-15 07:47:33.125639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:48.581 [2024-07-15 07:47:33.125647] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffb0182f000 00:11:48.581 [2024-07-15 07:47:33.126589] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:48.581 [2024-07-15 07:47:33.139201] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:48.581 [2024-07-15 07:47:33.139227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:48.581 [2024-07-15 07:47:33.144757] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:48.581 [2024-07-15 07:47:33.144794] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:48.581 [2024-07-15 07:47:33.144865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:48.581 [2024-07-15 07:47:33.144883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:48.581 [2024-07-15 07:47:33.144892] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:48.581 [2024-07-15 07:47:33.145760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:48.581 [2024-07-15 07:47:33.145768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:48.581 [2024-07-15 07:47:33.145775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:48.582 [2024-07-15 07:47:33.146761] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:48.582 [2024-07-15 07:47:33.146768] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:48.582 [2024-07-15 07:47:33.146775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.147773] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:48.582 [2024-07-15 07:47:33.147781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.148780] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:48.582 [2024-07-15 07:47:33.148788] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:48.582 [2024-07-15 07:47:33.148792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.148798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.148903] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:48.582 [2024-07-15 07:47:33.148907] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.148911] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:48.582 [2024-07-15 07:47:33.149785] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:48.582 [2024-07-15 07:47:33.150796] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:48.582 [2024-07-15 07:47:33.151801] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:48.582 [2024-07-15 07:47:33.152804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:48.582 [2024-07-15 07:47:33.152883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:48.582 [2024-07-15 07:47:33.153814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:48.582 [2024-07-15 07:47:33.153821] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:48.582 [2024-07-15 07:47:33.153825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.153845] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:48.582 [2024-07-15 07:47:33.153856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.153870] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.582 [2024-07-15 07:47:33.153875] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.582 [2024-07-15 07:47:33.153889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.582 [2024-07-15 07:47:33.153929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:48.582 [2024-07-15 07:47:33.153939] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:48.582 [2024-07-15 07:47:33.153947] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:48.582 [2024-07-15 07:47:33.153952] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:48.582 [2024-07-15 07:47:33.153956] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:48.582 [2024-07-15 07:47:33.153960] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:48.582 [2024-07-15 07:47:33.153965] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:48.582 [2024-07-15 07:47:33.153969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.153976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.153985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:48.582 [2024-07-15 07:47:33.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:48.582 [2024-07-15 07:47:33.154015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.582 [2024-07-15 07:47:33.154022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.582 [2024-07-15 07:47:33.154029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.582 [2024-07-15 07:47:33.154037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.582 [2024-07-15 07:47:33.154041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:48.582 [2024-07-15 07:47:33.154064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:48.582 [2024-07-15 07:47:33.154070] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:48.582 [2024-07-15 07:47:33.154076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:48.582 [2024-07-15 07:47:33.154103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:48.582 [2024-07-15 07:47:33.154151] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:48.582 [2024-07-15 07:47:33.154164] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:48.582 [2024-07-15 07:47:33.154168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:48.582 [2024-07-15 07:47:33.154174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:48.582 [2024-07-15 07:47:33.154186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:48.582 [2024-07-15 07:47:33.154195] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:48.583 [2024-07-15 07:47:33.154203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154216] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.583 [2024-07-15 07:47:33.154219] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.583 [2024-07-15 07:47:33.154229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154271] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.583 [2024-07-15 07:47:33.154274] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.583 [2024-07-15 07:47:33.154280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
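
(Editor's note: the *DEBUG* records above are SPDK's NVMe initialization state machine walking the vfio-user controller from CC.EN = 1 and CSTS.RDY = 1 through identify controller, AER configuration, keep-alive, queue-count negotiation, and per-namespace identify. They appear because the harness runs its tools with per-component debug logging enabled; a minimal sketch of reproducing the same trace by hand, assuming the workspace layout used in this job, with the flags copied from the spdk_nvme_identify invocation near the end of this excerpt:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -r is the transport ID string selecting the vfio-user controller; each -L enables
    # one debug log component, which is what emits the *DEBUG* lines seen in this log.
    $SPDK/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci

The same command appears verbatim for the second controller, vfio-user2/2, at the end of this excerpt.)
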
00:11:48.583 [2024-07-15 07:47:33.154318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154336] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:48.583 [2024-07-15 07:47:33.154340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:48.583 [2024-07-15 07:47:33.154345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:48.583 [2024-07-15 07:47:33.154362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:48.583 [2024-07-15 07:47:33.154444] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:48.583 [2024-07-15 07:47:33.154447] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:48.583 [2024-07-15 07:47:33.154450] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:48.583 [2024-07-15 07:47:33.154455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:48.583 [2024-07-15 07:47:33.154462] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:48.583 
[2024-07-15 07:47:33.154465] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:48.583 [2024-07-15 07:47:33.154471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154477] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:48.583 [2024-07-15 07:47:33.154481] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.583 [2024-07-15 07:47:33.154486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154495] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:48.583 [2024-07-15 07:47:33.154498] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:48.583 [2024-07-15 07:47:33.154504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:48.583 [2024-07-15 07:47:33.154510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:48.583 [2024-07-15 07:47:33.154537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:48.583 ===================================================== 00:11:48.583 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.583 ===================================================== 00:11:48.583 Controller Capabilities/Features 00:11:48.583 ================================ 00:11:48.583 Vendor ID: 4e58 00:11:48.583 Subsystem Vendor ID: 4e58 00:11:48.583 Serial Number: SPDK1 00:11:48.583 Model Number: SPDK bdev Controller 00:11:48.583 Firmware Version: 24.09 00:11:48.583 Recommended Arb Burst: 6 00:11:48.583 IEEE OUI Identifier: 8d 6b 50 00:11:48.583 Multi-path I/O 00:11:48.583 May have multiple subsystem ports: Yes 00:11:48.583 May have multiple controllers: Yes 00:11:48.583 Associated with SR-IOV VF: No 00:11:48.583 Max Data Transfer Size: 131072 00:11:48.583 Max Number of Namespaces: 32 00:11:48.583 Max Number of I/O Queues: 127 00:11:48.583 NVMe Specification Version (VS): 1.3 00:11:48.583 NVMe Specification Version (Identify): 1.3 00:11:48.583 Maximum Queue Entries: 256 00:11:48.583 Contiguous Queues Required: Yes 00:11:48.583 Arbitration Mechanisms Supported 00:11:48.583 Weighted Round Robin: Not Supported 00:11:48.583 Vendor Specific: Not Supported 00:11:48.583 Reset Timeout: 15000 ms 00:11:48.584 Doorbell Stride: 4 bytes 00:11:48.584 NVM Subsystem Reset: Not Supported 00:11:48.584 Command Sets Supported 00:11:48.584 NVM Command Set: Supported 00:11:48.584 Boot Partition: Not Supported 00:11:48.584 Memory Page Size Minimum: 4096 bytes 00:11:48.584 Memory Page Size Maximum: 4096 bytes 00:11:48.584 Persistent Memory Region: Not Supported 
00:11:48.584 Optional Asynchronous Events Supported 00:11:48.584 Namespace Attribute Notices: Supported 00:11:48.584 Firmware Activation Notices: Not Supported 00:11:48.584 ANA Change Notices: Not Supported 00:11:48.584 PLE Aggregate Log Change Notices: Not Supported 00:11:48.584 LBA Status Info Alert Notices: Not Supported 00:11:48.584 EGE Aggregate Log Change Notices: Not Supported 00:11:48.584 Normal NVM Subsystem Shutdown event: Not Supported 00:11:48.584 Zone Descriptor Change Notices: Not Supported 00:11:48.584 Discovery Log Change Notices: Not Supported 00:11:48.584 Controller Attributes 00:11:48.584 128-bit Host Identifier: Supported 00:11:48.584 Non-Operational Permissive Mode: Not Supported 00:11:48.584 NVM Sets: Not Supported 00:11:48.584 Read Recovery Levels: Not Supported 00:11:48.584 Endurance Groups: Not Supported 00:11:48.584 Predictable Latency Mode: Not Supported 00:11:48.584 Traffic Based Keep Alive: Not Supported 00:11:48.584 Namespace Granularity: Not Supported 00:11:48.584 SQ Associations: Not Supported 00:11:48.584 UUID List: Not Supported 00:11:48.584 Multi-Domain Subsystem: Not Supported 00:11:48.584 Fixed Capacity Management: Not Supported 00:11:48.584 Variable Capacity Management: Not Supported 00:11:48.584 Delete Endurance Group: Not Supported 00:11:48.584 Delete NVM Set: Not Supported 00:11:48.584 Extended LBA Formats Supported: Not Supported 00:11:48.584 Flexible Data Placement Supported: Not Supported 00:11:48.584 00:11:48.584 Controller Memory Buffer Support 00:11:48.584 ================================ 00:11:48.584 Supported: No 00:11:48.584 00:11:48.584 Persistent Memory Region Support 00:11:48.584 ================================ 00:11:48.584 Supported: No 00:11:48.584 00:11:48.584 Admin Command Set Attributes 00:11:48.584 ============================ 00:11:48.584 Security Send/Receive: Not Supported 00:11:48.584 Format NVM: Not Supported 00:11:48.584 Firmware Activate/Download: Not Supported 00:11:48.584 Namespace Management: Not Supported 00:11:48.584 Device Self-Test: Not Supported 00:11:48.584 Directives: Not Supported 00:11:48.584 NVMe-MI: Not Supported 00:11:48.584 Virtualization Management: Not Supported 00:11:48.584 Doorbell Buffer Config: Not Supported 00:11:48.584 Get LBA Status Capability: Not Supported 00:11:48.584 Command & Feature Lockdown Capability: Not Supported 00:11:48.584 Abort Command Limit: 4 00:11:48.584 Async Event Request Limit: 4 00:11:48.584 Number of Firmware Slots: N/A 00:11:48.584 Firmware Slot 1 Read-Only: N/A 00:11:48.584 Firmware Activation Without Reset: N/A 00:11:48.584 Multiple Update Detection Support: N/A 00:11:48.584 Firmware Update Granularity: No Information Provided 00:11:48.584 Per-Namespace SMART Log: No 00:11:48.584 Asymmetric Namespace Access Log Page: Not Supported 00:11:48.584 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:48.584 Command Effects Log Page: Supported 00:11:48.584 Get Log Page Extended Data: Supported 00:11:48.584 Telemetry Log Pages: Not Supported 00:11:48.584 Persistent Event Log Pages: Not Supported 00:11:48.584 Supported Log Pages Log Page: May Support 00:11:48.584 Commands Supported & Effects Log Page: Not Supported 00:11:48.584 Feature Identifiers & Effects Log Page: May Support 00:11:48.584 NVMe-MI Commands & Effects Log Page: May Support 00:11:48.584 Data Area 4 for Telemetry Log: Not Supported 00:11:48.584 Error Log Page Entries Supported: 128 00:11:48.584 Keep Alive: Supported 00:11:48.584 Keep Alive Granularity: 10000 ms 00:11:48.584 00:11:48.584 NVM Command Set Attributes
00:11:48.584 ========================== 00:11:48.584 Submission Queue Entry Size 00:11:48.584 Max: 64 00:11:48.584 Min: 64 00:11:48.584 Completion Queue Entry Size 00:11:48.584 Max: 16 00:11:48.584 Min: 16 00:11:48.584 Number of Namespaces: 32 00:11:48.584 Compare Command: Supported 00:11:48.584 Write Uncorrectable Command: Not Supported 00:11:48.584 Dataset Management Command: Supported 00:11:48.584 Write Zeroes Command: Supported 00:11:48.584 Set Features Save Field: Not Supported 00:11:48.584 Reservations: Not Supported 00:11:48.584 Timestamp: Not Supported 00:11:48.584 Copy: Supported 00:11:48.584 Volatile Write Cache: Present 00:11:48.584 Atomic Write Unit (Normal): 1 00:11:48.584 Atomic Write Unit (PFail): 1 00:11:48.584 Atomic Compare & Write Unit: 1 00:11:48.584 Fused Compare & Write: Supported 00:11:48.584 Scatter-Gather List 00:11:48.584 SGL Command Set: Supported (Dword aligned) 00:11:48.584 SGL Keyed: Not Supported 00:11:48.584 SGL Bit Bucket Descriptor: Not Supported 00:11:48.584 SGL Metadata Pointer: Not Supported 00:11:48.584 Oversized SGL: Not Supported 00:11:48.584 SGL Metadata Address: Not Supported 00:11:48.584 SGL Offset: Not Supported 00:11:48.584 Transport SGL Data Block: Not Supported 00:11:48.584 Replay Protected Memory Block: Not Supported 00:11:48.584 00:11:48.584 Firmware Slot Information 00:11:48.584 ========================= 00:11:48.584 Active slot: 1 00:11:48.584 Slot 1 Firmware Revision: 24.09 00:11:48.584 00:11:48.584 00:11:48.584 Commands Supported and Effects 00:11:48.584 ============================== 00:11:48.584 Admin Commands 00:11:48.584 -------------- 00:11:48.584 Get Log Page (02h): Supported 00:11:48.584 Identify (06h): Supported 00:11:48.584 Abort (08h): Supported 00:11:48.584 Set Features (09h): Supported 00:11:48.585 Get Features (0Ah): Supported 00:11:48.585 Asynchronous Event Request (0Ch): Supported 00:11:48.585 Keep Alive (18h): Supported 00:11:48.585 I/O Commands 00:11:48.585 ------------ 00:11:48.585 Flush (00h): Supported LBA-Change 00:11:48.585 Write (01h): Supported LBA-Change 00:11:48.585 Read (02h): Supported 00:11:48.585 Compare (05h): Supported 00:11:48.585 Write Zeroes (08h): Supported LBA-Change 00:11:48.585 Dataset Management (09h): Supported LBA-Change 00:11:48.585 Copy (19h): Supported LBA-Change 00:11:48.585 00:11:48.585 Error Log 00:11:48.585 ========= 00:11:48.585 00:11:48.585 Arbitration 00:11:48.585 =========== 00:11:48.585 Arbitration Burst: 1 00:11:48.585 00:11:48.585 Power Management 00:11:48.585 ================ 00:11:48.585 Number of Power States: 1 00:11:48.585 Current Power State: Power State #0 00:11:48.585 Power State #0: 00:11:48.585 Max Power: 0.00 W 00:11:48.585 Non-Operational State: Operational 00:11:48.585 Entry Latency: Not Reported 00:11:48.585 Exit Latency: Not Reported 00:11:48.585 Relative Read Throughput: 0 00:11:48.585 Relative Read Latency: 0 00:11:48.585 Relative Write Throughput: 0 00:11:48.585 Relative Write Latency: 0 00:11:48.585 Idle Power: Not Reported 00:11:48.585 Active Power: Not Reported 00:11:48.585 Non-Operational Permissive Mode: Not Supported 00:11:48.585 00:11:48.585 Health Information 00:11:48.585 ================== 00:11:48.585 Critical Warnings: 00:11:48.585 Available Spare Space: OK 00:11:48.585 Temperature: OK 00:11:48.585 Device Reliability: OK 00:11:48.585 Read Only: No 00:11:48.585 Volatile Memory Backup: OK 00:11:48.585 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:48.585 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:48.585 Available Spare: 0% 00:11:48.585 
[2024-07-15 07:47:33.154625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:48.585 [2024-07-15 07:47:33.154633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:48.585 [2024-07-15 07:47:33.154659] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:48.585 [2024-07-15 07:47:33.154668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.585 [2024-07-15 07:47:33.154673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.585 [2024-07-15 07:47:33.154679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.585 [2024-07-15 07:47:33.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.585 [2024-07-15 07:47:33.154818] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:48.585 [2024-07-15 07:47:33.154827] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:48.585 [2024-07-15 07:47:33.155826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:48.585 [2024-07-15 07:47:33.155875] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:48.585 [2024-07-15 07:47:33.155882] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:48.585 [2024-07-15 07:47:33.156833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:48.585 [2024-07-15 07:47:33.156843] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:48.585 [2024-07-15 07:47:33.156891] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:48.585 [2024-07-15 07:47:33.158865] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:48.585 Available Spare Threshold: 0% 00:11:48.585 Life Percentage Used: 0% 00:11:48.585 Data Units Read: 0 00:11:48.585 Data Units Written: 0 00:11:48.585 Host Read Commands: 0 00:11:48.585 Host Write Commands: 0 00:11:48.585 Controller Busy Time: 0 minutes 00:11:48.585 Power Cycles: 0 00:11:48.585 Power On Hours: 0 hours 00:11:48.585 Unsafe Shutdowns: 0 00:11:48.585 Unrecoverable Media Errors: 0 00:11:48.585 Lifetime Error Log Entries: 0 00:11:48.585 Warning Temperature Time: 0 minutes 00:11:48.585 Critical Temperature Time: 0 minutes 00:11:48.585 00:11:48.585 Number of Queues 00:11:48.585 ================ 00:11:48.585 Number of I/O Submission Queues: 127 00:11:48.585 Number of I/O Completion Queues: 127 00:11:48.585 00:11:48.585 Active Namespaces 00:11:48.585 ================= 00:11:48.585 Namespace ID:1 00:11:48.585 Error Recovery Timeout: Unlimited 00:11:48.585 Command
Set Identifier: NVM (00h) 00:11:48.585 Deallocate: Supported 00:11:48.585 Deallocated/Unwritten Error: Not Supported 00:11:48.585 Deallocated Read Value: Unknown 00:11:48.585 Deallocate in Write Zeroes: Not Supported 00:11:48.585 Deallocated Guard Field: 0xFFFF 00:11:48.585 Flush: Supported 00:11:48.585 Reservation: Supported 00:11:48.585 Namespace Sharing Capabilities: Multiple Controllers 00:11:48.585 Size (in LBAs): 131072 (0GiB) 00:11:48.585 Capacity (in LBAs): 131072 (0GiB) 00:11:48.585 Utilization (in LBAs): 131072 (0GiB) 00:11:48.585 NGUID: B128144F450A420F9C5A713BF15ED996 00:11:48.585 UUID: b128144f-450a-420f-9c5a-713bf15ed996 00:11:48.585 Thin Provisioning: Not Supported 00:11:48.585 Per-NS Atomic Units: Yes 00:11:48.585 Atomic Boundary Size (Normal): 0 00:11:48.585 Atomic Boundary Size (PFail): 0 00:11:48.585 Atomic Boundary Offset: 0 00:11:48.585 Maximum Single Source Range Length: 65535 00:11:48.585 Maximum Copy Length: 65535 00:11:48.585 Maximum Source Range Count: 1 00:11:48.585 NGUID/EUI64 Never Reused: No 00:11:48.585 Namespace Write Protected: No 00:11:48.585 Number of LBA Formats: 1 00:11:48.585 Current LBA Format: LBA Format #00 00:11:48.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:48.585 00:11:48.586 07:47:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:48.586 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.843 [2024-07-15 07:47:33.371978] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.115 Initializing NVMe Controllers 00:11:54.115 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:54.115 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:54.115 Initialization complete. Launching workers. 00:11:54.115 ======================================================== 00:11:54.115 Latency(us) 00:11:54.115 Device Information : IOPS MiB/s Average min max 00:11:54.115 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39934.97 156.00 3205.02 967.10 6767.78 00:11:54.115 ======================================================== 00:11:54.115 Total : 39934.97 156.00 3205.02 967.10 6767.78 00:11:54.115 00:11:54.115 [2024-07-15 07:47:38.391967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.115 07:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:54.115 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.115 [2024-07-15 07:47:38.618012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.405 Initializing NVMe Controllers 00:11:59.405 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:59.405 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:59.405 Initialization complete. Launching workers. 
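
(Editor's note: the read pass that just completed and the write pass now starting share one spdk_nvme_perf pattern; an annotated sketch, with every value copied from this job's invocations:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -q 128: queue depth; -o 4096: I/O size in bytes; -w read|write: access pattern;
    # -t 5: run time in seconds; -c 0x2: core mask (one worker on core 1);
    # -s 256 and -g: memory/environment options as passed by the harness.
    $SPDK/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

As a sanity check on the read table above, 39934.97 IOPS at 4096 bytes is 39934.97 / 256 ≈ 156.00 MiB/s, which matches the MiB/s column.)
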
00:11:59.405 ======================================================== 00:11:59.405 Latency(us) 00:11:59.405 Device Information : IOPS MiB/s Average min max 00:11:59.405 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.06 62.70 7979.88 6985.68 8975.19 00:11:59.405 ======================================================== 00:11:59.405 Total : 16051.06 62.70 7979.88 6985.68 8975.19 00:11:59.405 00:11:59.405 [2024-07-15 07:47:43.660388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.405 07:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:59.405 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.405 [2024-07-15 07:47:43.858326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.677 [2024-07-15 07:47:48.928544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.677 Initializing NVMe Controllers 00:12:04.677 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:04.677 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:04.677 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:04.677 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:04.677 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:04.677 Initialization complete. Launching workers. 00:12:04.677 Starting thread on core 2 00:12:04.677 Starting thread on core 3 00:12:04.677 Starting thread on core 1 00:12:04.677 07:47:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:04.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.677 [2024-07-15 07:47:49.207600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.967 [2024-07-15 07:47:52.272065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.967 Initializing NVMe Controllers 00:12:07.967 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.967 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.967 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:07.967 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:07.967 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:07.967 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:07.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:07.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:07.967 Initialization complete. Launching workers. 
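
(Editor's note: the reconnect and arbitration examples above take the same -r transport-ID string as perf, so every example binary in the suite can be pointed at the same vfio-user controller. The arbitration invocation from this run, annotated; it echoes its effective configuration line in the output below and then reports per-core IO/s plus projected seconds per 100000 I/Os:

    # -t 3: run time in seconds; -d 256 and -g: environment options as passed by the harness.
    $SPDK/build/examples/arbitration -t 3 -d 256 -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
)
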
00:12:07.967 Starting thread on core 1 with urgent priority queue 00:12:07.967 Starting thread on core 2 with urgent priority queue 00:12:07.967 Starting thread on core 3 with urgent priority queue 00:12:07.967 Starting thread on core 0 with urgent priority queue 00:12:07.967 SPDK bdev Controller (SPDK1 ) core 0: 7801.67 IO/s 12.82 secs/100000 ios 00:12:07.967 SPDK bdev Controller (SPDK1 ) core 1: 8930.67 IO/s 11.20 secs/100000 ios 00:12:07.967 SPDK bdev Controller (SPDK1 ) core 2: 10209.67 IO/s 9.79 secs/100000 ios 00:12:07.967 SPDK bdev Controller (SPDK1 ) core 3: 8197.00 IO/s 12.20 secs/100000 ios 00:12:07.967 ======================================================== 00:12:07.967 00:12:07.967 07:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:07.967 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.967 [2024-07-15 07:47:52.547746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.967 Initializing NVMe Controllers 00:12:07.967 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.967 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.967 Namespace ID: 1 size: 0GB 00:12:07.967 Initialization complete. 00:12:07.967 INFO: using host memory buffer for IO 00:12:07.967 Hello world! 00:12:07.967 [2024-07-15 07:47:52.580968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.967 07:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:07.967 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.226 [2024-07-15 07:47:52.840347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:09.163 Initializing NVMe Controllers 00:12:09.163 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:09.163 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:09.163 Initialization complete. Launching workers. 
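
(Editor's note: the overhead tool whose output follows prints a submit and a complete latency histogram. The avg/min/max summary lines are in nanoseconds; each histogram row "A - B: P% ( N )" means N operations fell into the A-to-B microsecond bucket and P% of all operations finished at or below B. A small sketch for pulling the first row at or past a target percentile from a saved copy of this console output, assuming one histogram row per line with the Jenkins stream timestamp prefix; jenkins.log is a hypothetical file name:

    # Fields per row: <timestamp> <low> - <high>: <cum%> ( <count> );
    # print the first bucket whose cumulative percentage reaches 99%.
    awk '$3 == "-" && $5 + 0 >= 99 { print; exit }' jenkins.log
)
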
00:12:09.163 submit (in ns) avg, min, max = 6139.6, 3247.0, 4000230.4 00:12:09.163 complete (in ns) avg, min, max = 22380.7, 1750.4, 6990198.3 00:12:09.163 00:12:09.163 Submit histogram 00:12:09.163 ================ 00:12:09.163 Range in us Cumulative Count 00:12:09.163 3.242 - 3.256: 0.0061% ( 1) 00:12:09.163 3.256 - 3.270: 0.0121% ( 1) 00:12:09.163 3.283 - 3.297: 0.0364% ( 4) 00:12:09.163 3.297 - 3.311: 0.3704% ( 55) 00:12:09.163 3.311 - 3.325: 2.3561% ( 327) 00:12:09.163 3.325 - 3.339: 6.9468% ( 756) 00:12:09.163 3.339 - 3.353: 12.6488% ( 939) 00:12:09.163 3.353 - 3.367: 18.8122% ( 1015) 00:12:09.163 3.367 - 3.381: 25.3765% ( 1081) 00:12:09.163 3.381 - 3.395: 30.8356% ( 899) 00:12:09.163 3.395 - 3.409: 36.2764% ( 896) 00:12:09.163 3.409 - 3.423: 41.7051% ( 894) 00:12:09.163 3.423 - 3.437: 45.9922% ( 706) 00:12:09.163 3.437 - 3.450: 50.6983% ( 775) 00:12:09.163 3.450 - 3.464: 56.5885% ( 970) 00:12:09.163 3.464 - 3.478: 63.2378% ( 1095) 00:12:09.163 3.478 - 3.492: 67.7253% ( 739) 00:12:09.163 3.492 - 3.506: 72.2188% ( 740) 00:12:09.163 3.506 - 3.520: 77.5808% ( 883) 00:12:09.163 3.520 - 3.534: 81.6068% ( 663) 00:12:09.163 3.534 - 3.548: 84.5761% ( 489) 00:12:09.163 3.548 - 3.562: 86.2643% ( 278) 00:12:09.163 3.562 - 3.590: 87.6852% ( 234) 00:12:09.163 3.590 - 3.617: 88.5536% ( 143) 00:12:09.163 3.617 - 3.645: 90.0777% ( 251) 00:12:09.163 3.645 - 3.673: 91.9116% ( 302) 00:12:09.163 3.673 - 3.701: 93.6240% ( 282) 00:12:09.163 3.701 - 3.729: 95.2818% ( 273) 00:12:09.163 3.729 - 3.757: 96.7209% ( 237) 00:12:09.163 3.757 - 3.784: 98.0568% ( 220) 00:12:09.163 3.784 - 3.812: 98.7248% ( 110) 00:12:09.163 3.812 - 3.840: 99.2045% ( 79) 00:12:09.163 3.840 - 3.868: 99.5081% ( 50) 00:12:09.163 3.868 - 3.896: 99.6599% ( 25) 00:12:09.163 3.896 - 3.923: 99.7025% ( 7) 00:12:09.163 3.923 - 3.951: 99.7328% ( 5) 00:12:09.163 3.951 - 3.979: 99.7450% ( 2) 00:12:09.163 3.979 - 4.007: 99.7510% ( 1) 00:12:09.163 5.482 - 5.510: 99.7571% ( 1) 00:12:09.163 5.593 - 5.621: 99.7692% ( 2) 00:12:09.163 5.843 - 5.871: 99.7753% ( 1) 00:12:09.163 5.955 - 5.983: 99.7814% ( 1) 00:12:09.163 6.177 - 6.205: 99.7875% ( 1) 00:12:09.163 6.205 - 6.233: 99.7935% ( 1) 00:12:09.163 6.233 - 6.261: 99.7996% ( 1) 00:12:09.163 6.456 - 6.483: 99.8057% ( 1) 00:12:09.163 6.567 - 6.595: 99.8118% ( 1) 00:12:09.163 6.650 - 6.678: 99.8239% ( 2) 00:12:09.163 6.678 - 6.706: 99.8300% ( 1) 00:12:09.163 6.706 - 6.734: 99.8421% ( 2) 00:12:09.163 6.790 - 6.817: 99.8482% ( 1) 00:12:09.163 6.817 - 6.845: 99.8543% ( 1) 00:12:09.163 6.901 - 6.929: 99.8664% ( 2) 00:12:09.163 7.068 - 7.096: 99.8725% ( 1) 00:12:09.163 7.290 - 7.346: 99.8786% ( 1) 00:12:09.163 7.346 - 7.402: 99.8907% ( 2) 00:12:09.163 7.513 - 7.569: 99.9028% ( 2) 00:12:09.163 7.680 - 7.736: 99.9089% ( 1) 00:12:09.163 7.791 - 7.847: 99.9150% ( 1) 00:12:09.163 7.958 - 8.014: 99.9211% ( 1) 00:12:09.163 8.014 - 8.070: 99.9271% ( 1) 00:12:09.163 9.071 - 9.127: 99.9332% ( 1) 00:12:09.163 3989.148 - 4017.642: 100.0000% ( 11) 00:12:09.163 00:12:09.163 Complete histogram 00:12:09.163 ================== 00:12:09.163 Range in us Cumulative Count 00:12:09.163 1.746 - 1.753: 0.0061% ( 1) 00:12:09.163 1.795 - 1.809: 0.1093% ( 17) 00:12:09.163 1.809 - 1.823: 3.5341% ( 564) 00:12:09.163 1.823 - 1.837: 10.1652% ( 1092) 00:12:09.163 1.837 - 1.850: 12.4484% ( 376) 00:12:09.163 1.850 - 1.864: 28.8924% ( 2708) 00:12:09.163 1.864 - 1.878: 77.2893% ( 7970) 00:12:09.163 1.878 - 1.892: 92.5431% ( 2512) 00:12:09.163 1.892 - 1.906: 95.3425% ( 461) 00:12:09.163 1.906 - 1.920: 96.2169% ( 144) 00:12:09.163 1.920 - 1.934: 
96.8727% ( 108) 00:12:09.163 1.934 - 1.948: 97.9658% ( 180) 00:12:09.163 1.948 - 1.962: 98.8402% ( 144) 00:12:09.163 1.962 - 1.976: 99.1074% ( 44) 00:12:09.163 1.976 - [2024-07-15 07:47:53.864284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:09.163 1.990: 99.2106% ( 17) 00:12:09.163 1.990 - 2.003: 99.2531% ( 7) 00:12:09.163 2.003 - 2.017: 99.2592% ( 1) 00:12:09.163 2.143 - 2.157: 99.2713% ( 2) 00:12:09.163 4.007 - 4.035: 99.2774% ( 1) 00:12:09.163 4.424 - 4.452: 99.2835% ( 1) 00:12:09.163 4.536 - 4.563: 99.2895% ( 1) 00:12:09.163 4.591 - 4.619: 99.3017% ( 2) 00:12:09.163 4.897 - 4.925: 99.3138% ( 2) 00:12:09.163 4.953 - 4.981: 99.3260% ( 2) 00:12:09.163 5.092 - 5.120: 99.3320% ( 1) 00:12:09.163 5.148 - 5.176: 99.3381% ( 1) 00:12:09.163 5.203 - 5.231: 99.3442% ( 1) 00:12:09.163 5.259 - 5.287: 99.3503% ( 1) 00:12:09.163 5.287 - 5.315: 99.3563% ( 1) 00:12:09.163 5.510 - 5.537: 99.3624% ( 1) 00:12:09.163 5.537 - 5.565: 99.3685% ( 1) 00:12:09.163 5.677 - 5.704: 99.3806% ( 2) 00:12:09.163 5.760 - 5.788: 99.3867% ( 1) 00:12:09.163 5.788 - 5.816: 99.3928% ( 1) 00:12:09.163 5.843 - 5.871: 99.3988% ( 1) 00:12:09.163 5.927 - 5.955: 99.4049% ( 1) 00:12:09.163 6.038 - 6.066: 99.4110% ( 1) 00:12:09.164 6.094 - 6.122: 99.4231% ( 2) 00:12:09.164 6.122 - 6.150: 99.4292% ( 1) 00:12:09.164 6.177 - 6.205: 99.4353% ( 1) 00:12:09.164 6.344 - 6.372: 99.4413% ( 1) 00:12:09.164 6.400 - 6.428: 99.4535% ( 2) 00:12:09.164 6.511 - 6.539: 99.4596% ( 1) 00:12:09.164 6.539 - 6.567: 99.4656% ( 1) 00:12:09.164 7.012 - 7.040: 99.4717% ( 1) 00:12:09.164 9.405 - 9.461: 99.4778% ( 1) 00:12:09.164 10.184 - 10.240: 99.4838% ( 1) 00:12:09.164 11.297 - 11.353: 99.4899% ( 1) 00:12:09.164 12.967 - 13.023: 99.4960% ( 1) 00:12:09.164 1025.781 - 1032.904: 99.5021% ( 1) 00:12:09.164 3006.108 - 3020.355: 99.5081% ( 1) 00:12:09.164 3034.602 - 3048.849: 99.5142% ( 1) 00:12:09.164 3989.148 - 4017.642: 99.9696% ( 75) 00:12:09.164 4986.435 - 5014.929: 99.9818% ( 2) 00:12:09.164 6981.009 - 7009.503: 100.0000% ( 3) 00:12:09.164 00:12:09.164 07:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:09.164 07:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:09.164 07:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:09.164 07:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:09.164 07:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:09.423 [ 00:12:09.423 { 00:12:09.423 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:09.423 "subtype": "Discovery", 00:12:09.423 "listen_addresses": [], 00:12:09.423 "allow_any_host": true, 00:12:09.423 "hosts": [] 00:12:09.423 }, 00:12:09.423 { 00:12:09.423 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:09.423 "subtype": "NVMe", 00:12:09.423 "listen_addresses": [ 00:12:09.423 { 00:12:09.423 "trtype": "VFIOUSER", 00:12:09.423 "adrfam": "IPv4", 00:12:09.423 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:09.423 "trsvcid": "0" 00:12:09.423 } 00:12:09.423 ], 00:12:09.423 "allow_any_host": true, 00:12:09.423 "hosts": [], 00:12:09.423 "serial_number": "SPDK1", 00:12:09.423 "model_number": "SPDK bdev Controller", 00:12:09.423 "max_namespaces": 32, 00:12:09.423 
"min_cntlid": 1, 00:12:09.423 "max_cntlid": 65519, 00:12:09.423 "namespaces": [ 00:12:09.423 { 00:12:09.423 "nsid": 1, 00:12:09.423 "bdev_name": "Malloc1", 00:12:09.423 "name": "Malloc1", 00:12:09.423 "nguid": "B128144F450A420F9C5A713BF15ED996", 00:12:09.423 "uuid": "b128144f-450a-420f-9c5a-713bf15ed996" 00:12:09.423 } 00:12:09.423 ] 00:12:09.423 }, 00:12:09.423 { 00:12:09.423 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:09.423 "subtype": "NVMe", 00:12:09.423 "listen_addresses": [ 00:12:09.423 { 00:12:09.423 "trtype": "VFIOUSER", 00:12:09.423 "adrfam": "IPv4", 00:12:09.423 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:09.423 "trsvcid": "0" 00:12:09.423 } 00:12:09.423 ], 00:12:09.423 "allow_any_host": true, 00:12:09.423 "hosts": [], 00:12:09.423 "serial_number": "SPDK2", 00:12:09.423 "model_number": "SPDK bdev Controller", 00:12:09.423 "max_namespaces": 32, 00:12:09.423 "min_cntlid": 1, 00:12:09.423 "max_cntlid": 65519, 00:12:09.423 "namespaces": [ 00:12:09.423 { 00:12:09.423 "nsid": 1, 00:12:09.423 "bdev_name": "Malloc2", 00:12:09.423 "name": "Malloc2", 00:12:09.423 "nguid": "DA4FD02D94984082BD0C14ED1ECBEC43", 00:12:09.423 "uuid": "da4fd02d-9498-4082-bd0c-14ed1ecbec43" 00:12:09.423 } 00:12:09.423 ] 00:12:09.423 } 00:12:09.423 ] 00:12:09.423 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:09.423 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3178437 00:12:09.423 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:09.423 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:09.424 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:09.424 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.683 [2024-07-15 07:47:54.230629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:09.683 Malloc3 00:12:09.683 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:09.944 [2024-07-15 07:47:54.455383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:09.944 Asynchronous Event Request test 00:12:09.944 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:09.944 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:09.944 Registering asynchronous event callbacks... 00:12:09.944 Starting namespace attribute notice tests for all controllers... 00:12:09.944 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:09.944 aer_cb - Changed Namespace 00:12:09.944 Cleaning up... 00:12:09.944 [ 00:12:09.944 { 00:12:09.944 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:09.944 "subtype": "Discovery", 00:12:09.944 "listen_addresses": [], 00:12:09.944 "allow_any_host": true, 00:12:09.944 "hosts": [] 00:12:09.944 }, 00:12:09.944 { 00:12:09.944 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:09.944 "subtype": "NVMe", 00:12:09.944 "listen_addresses": [ 00:12:09.944 { 00:12:09.944 "trtype": "VFIOUSER", 00:12:09.944 "adrfam": "IPv4", 00:12:09.944 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:09.944 "trsvcid": "0" 00:12:09.944 } 00:12:09.944 ], 00:12:09.944 "allow_any_host": true, 00:12:09.944 "hosts": [], 00:12:09.944 "serial_number": "SPDK1", 00:12:09.944 "model_number": "SPDK bdev Controller", 00:12:09.944 "max_namespaces": 32, 00:12:09.944 "min_cntlid": 1, 00:12:09.944 "max_cntlid": 65519, 00:12:09.944 "namespaces": [ 00:12:09.944 { 00:12:09.944 "nsid": 1, 00:12:09.944 "bdev_name": "Malloc1", 00:12:09.944 "name": "Malloc1", 00:12:09.944 "nguid": "B128144F450A420F9C5A713BF15ED996", 00:12:09.944 "uuid": "b128144f-450a-420f-9c5a-713bf15ed996" 00:12:09.944 }, 00:12:09.944 { 00:12:09.944 "nsid": 2, 00:12:09.944 "bdev_name": "Malloc3", 00:12:09.944 "name": "Malloc3", 00:12:09.944 "nguid": "34D63E5D070E4537A8F4604A1E3A07E7", 00:12:09.944 "uuid": "34d63e5d-070e-4537-a8f4-604a1e3a07e7" 00:12:09.944 } 00:12:09.944 ] 00:12:09.944 }, 00:12:09.944 { 00:12:09.944 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:09.944 "subtype": "NVMe", 00:12:09.944 "listen_addresses": [ 00:12:09.944 { 00:12:09.944 "trtype": "VFIOUSER", 00:12:09.944 "adrfam": "IPv4", 00:12:09.944 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:09.944 "trsvcid": "0" 00:12:09.944 } 00:12:09.944 ], 00:12:09.944 "allow_any_host": true, 00:12:09.944 "hosts": [], 00:12:09.944 "serial_number": "SPDK2", 00:12:09.944 "model_number": "SPDK bdev Controller", 00:12:09.944 
"max_namespaces": 32, 00:12:09.944 "min_cntlid": 1, 00:12:09.944 "max_cntlid": 65519, 00:12:09.944 "namespaces": [ 00:12:09.944 { 00:12:09.944 "nsid": 1, 00:12:09.944 "bdev_name": "Malloc2", 00:12:09.944 "name": "Malloc2", 00:12:09.944 "nguid": "DA4FD02D94984082BD0C14ED1ECBEC43", 00:12:09.944 "uuid": "da4fd02d-9498-4082-bd0c-14ed1ecbec43" 00:12:09.944 } 00:12:09.944 ] 00:12:09.944 } 00:12:09.944 ] 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3178437 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:09.944 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:09.944 [2024-07-15 07:47:54.685504] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:12:09.944 [2024-07-15 07:47:54.685534] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178658 ] 00:12:09.944 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.206 [2024-07-15 07:47:54.715626] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:10.206 [2024-07-15 07:47:54.719236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:10.206 [2024-07-15 07:47:54.719257] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe24d838000 00:12:10.206 [2024-07-15 07:47:54.720237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.721241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.722245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.723254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.724257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.725267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.726270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.727277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.206 [2024-07-15 07:47:54.728291] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:10.206 [2024-07-15 07:47:54.728300] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe24d82d000 00:12:10.206 [2024-07-15 07:47:54.729239] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:10.206 [2024-07-15 07:47:54.737769] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:10.206 [2024-07-15 07:47:54.737792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:10.206 [2024-07-15 07:47:54.742873] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:10.206 [2024-07-15 07:47:54.742911] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:10.206 [2024-07-15 07:47:54.742975] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:10.206 [2024-07-15 07:47:54.742990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:10.206 [2024-07-15 07:47:54.742995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:10.206 [2024-07-15 07:47:54.743882] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:10.206 [2024-07-15 07:47:54.743892] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:10.206 [2024-07-15 07:47:54.743898] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:10.206 [2024-07-15 07:47:54.744891] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:10.206 [2024-07-15 07:47:54.744900] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:10.206 [2024-07-15 07:47:54.744907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.745900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:10.206 [2024-07-15 07:47:54.745909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.746908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:10.206 [2024-07-15 07:47:54.746917] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:10.206 [2024-07-15 07:47:54.746922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.746927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.747032] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:10.206 [2024-07-15 07:47:54.747036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.747041] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:10.206 [2024-07-15 07:47:54.747917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:10.206 [2024-07-15 07:47:54.748924] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:10.206 [2024-07-15 07:47:54.749928] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:10.206 [2024-07-15 07:47:54.750932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:10.206 [2024-07-15 07:47:54.750970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:10.206 [2024-07-15 07:47:54.751942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:10.206 [2024-07-15 07:47:54.751951] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:10.206 [2024-07-15 07:47:54.751955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.751972] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:10.206 [2024-07-15 07:47:54.751982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.751992] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.206 [2024-07-15 07:47:54.751997] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.206 [2024-07-15 07:47:54.752007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.206 [2024-07-15 07:47:54.759233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:10.206 [2024-07-15 07:47:54.759245] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:10.206 [2024-07-15 07:47:54.759251] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:10.206 [2024-07-15 07:47:54.759255] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:10.206 [2024-07-15 07:47:54.759260] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:10.206 [2024-07-15 07:47:54.759265] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:10.206 [2024-07-15 07:47:54.759269] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:10.206 [2024-07-15 07:47:54.759273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.759280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.759289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:10.206 [2024-07-15 07:47:54.767231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:10.206 [2024-07-15 07:47:54.767245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.206 [2024-07-15 07:47:54.767253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.206 [2024-07-15 07:47:54.767260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.206 [2024-07-15 07:47:54.767267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.206 [2024-07-15 07:47:54.767271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.767278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.767289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:10.206 [2024-07-15 07:47:54.775230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:10.206 [2024-07-15 07:47:54.775237] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:10.206 [2024-07-15 07:47:54.775242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.775248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.775253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.775261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:10.206 [2024-07-15 07:47:54.783230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:10.206 [2024-07-15 07:47:54.783283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.783290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:10.206 [2024-07-15 07:47:54.783297] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:10.206 [2024-07-15 07:47:54.783301] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:10.207 [2024-07-15 07:47:54.783307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.791244] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:10.207 [2024-07-15 07:47:54.791253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.791259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.791266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.207 [2024-07-15 07:47:54.791270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.207 [2024-07-15 07:47:54.791275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.799231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.799248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.799256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.799263] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.207 [2024-07-15 07:47:54.799267] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.207 [2024-07-15 07:47:54.799275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.807232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.807243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807277] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:10.207 [2024-07-15 07:47:54.807280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:10.207 [2024-07-15 07:47:54.807285] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:10.207 [2024-07-15 07:47:54.807300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.815232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.815245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.823230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.823241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.831233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.831245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.839232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.839246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:10.207 [2024-07-15 07:47:54.839251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:10.207 [2024-07-15 07:47:54.839254] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:10.207 [2024-07-15 07:47:54.839257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:10.207 [2024-07-15 07:47:54.839263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:10.207 [2024-07-15 07:47:54.839270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:10.207 [2024-07-15 07:47:54.839273] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:10.207 [2024-07-15 07:47:54.839281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.839287] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:10.207 [2024-07-15 07:47:54.839291] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.207 [2024-07-15 07:47:54.839296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.839303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:10.207 [2024-07-15 07:47:54.839307] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:10.207 [2024-07-15 07:47:54.839312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:10.207 [2024-07-15 07:47:54.847231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.847244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.847253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:10.207 [2024-07-15 07:47:54.847259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:10.207 ===================================================== 00:12:10.207 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.207 ===================================================== 00:12:10.207 Controller Capabilities/Features 00:12:10.207 ================================ 00:12:10.207 Vendor ID: 4e58 00:12:10.207 Subsystem Vendor ID: 4e58 00:12:10.207 Serial Number: SPDK2 00:12:10.207 Model Number: SPDK bdev Controller 00:12:10.207 Firmware Version: 24.09 00:12:10.207 Recommended Arb Burst: 6 00:12:10.207 IEEE OUI Identifier: 8d 6b 50 00:12:10.207 Multi-path I/O 00:12:10.207 May have multiple subsystem ports: Yes 00:12:10.207 May have multiple controllers: Yes 00:12:10.207 Associated with SR-IOV VF: No 00:12:10.207 Max Data Transfer Size: 131072 00:12:10.207 Max Number of Namespaces: 32 00:12:10.207 Max Number of I/O Queues: 127 00:12:10.207 NVMe Specification Version (VS): 1.3 00:12:10.207 NVMe Specification Version (Identify): 1.3 00:12:10.207 Maximum Queue Entries: 256 00:12:10.207 Contiguous Queues Required: Yes 00:12:10.207 Arbitration Mechanisms 
Supported 00:12:10.207 Weighted Round Robin: Not Supported 00:12:10.207 Vendor Specific: Not Supported 00:12:10.207 Reset Timeout: 15000 ms 00:12:10.207 Doorbell Stride: 4 bytes 00:12:10.207 NVM Subsystem Reset: Not Supported 00:12:10.207 Command Sets Supported 00:12:10.207 NVM Command Set: Supported 00:12:10.207 Boot Partition: Not Supported 00:12:10.207 Memory Page Size Minimum: 4096 bytes 00:12:10.207 Memory Page Size Maximum: 4096 bytes 00:12:10.207 Persistent Memory Region: Not Supported 00:12:10.207 Optional Asynchronous Events Supported 00:12:10.207 Namespace Attribute Notices: Supported 00:12:10.207 Firmware Activation Notices: Not Supported 00:12:10.207 ANA Change Notices: Not Supported 00:12:10.207 PLE Aggregate Log Change Notices: Not Supported 00:12:10.207 LBA Status Info Alert Notices: Not Supported 00:12:10.207 EGE Aggregate Log Change Notices: Not Supported 00:12:10.207 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.207 Zone Descriptor Change Notices: Not Supported 00:12:10.207 Discovery Log Change Notices: Not Supported 00:12:10.207 Controller Attributes 00:12:10.207 128-bit Host Identifier: Supported 00:12:10.207 Non-Operational Permissive Mode: Not Supported 00:12:10.207 NVM Sets: Not Supported 00:12:10.207 Read Recovery Levels: Not Supported 00:12:10.207 Endurance Groups: Not Supported 00:12:10.207 Predictable Latency Mode: Not Supported 00:12:10.207 Traffic Based Keep ALive: Not Supported 00:12:10.207 Namespace Granularity: Not Supported 00:12:10.207 SQ Associations: Not Supported 00:12:10.207 UUID List: Not Supported 00:12:10.207 Multi-Domain Subsystem: Not Supported 00:12:10.207 Fixed Capacity Management: Not Supported 00:12:10.207 Variable Capacity Management: Not Supported 00:12:10.207 Delete Endurance Group: Not Supported 00:12:10.207 Delete NVM Set: Not Supported 00:12:10.207 Extended LBA Formats Supported: Not Supported 00:12:10.207 Flexible Data Placement Supported: Not Supported 00:12:10.207 00:12:10.207 Controller Memory Buffer Support 00:12:10.207 ================================ 00:12:10.207 Supported: No 00:12:10.207 00:12:10.207 Persistent Memory Region Support 00:12:10.207 ================================ 00:12:10.207 Supported: No 00:12:10.207 00:12:10.207 Admin Command Set Attributes 00:12:10.207 ============================ 00:12:10.207 Security Send/Receive: Not Supported 00:12:10.207 Format NVM: Not Supported 00:12:10.207 Firmware Activate/Download: Not Supported 00:12:10.207 Namespace Management: Not Supported 00:12:10.207 Device Self-Test: Not Supported 00:12:10.207 Directives: Not Supported 00:12:10.207 NVMe-MI: Not Supported 00:12:10.207 Virtualization Management: Not Supported 00:12:10.207 Doorbell Buffer Config: Not Supported 00:12:10.207 Get LBA Status Capability: Not Supported 00:12:10.207 Command & Feature Lockdown Capability: Not Supported 00:12:10.207 Abort Command Limit: 4 00:12:10.207 Async Event Request Limit: 4 00:12:10.207 Number of Firmware Slots: N/A 00:12:10.208 Firmware Slot 1 Read-Only: N/A 00:12:10.208 Firmware Activation Without Reset: N/A 00:12:10.208 Multiple Update Detection Support: N/A 00:12:10.208 Firmware Update Granularity: No Information Provided 00:12:10.208 Per-Namespace SMART Log: No 00:12:10.208 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.208 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:10.208 Command Effects Log Page: Supported 00:12:10.208 Get Log Page Extended Data: Supported 00:12:10.208 Telemetry Log Pages: Not Supported 00:12:10.208 Persistent Event Log Pages: Not Supported 
00:12:10.208 Supported Log Pages Log Page: May Support 00:12:10.208 Commands Supported & Effects Log Page: Not Supported 00:12:10.208 Feature Identifiers & Effects Log Page:May Support 00:12:10.208 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.208 Data Area 4 for Telemetry Log: Not Supported 00:12:10.208 Error Log Page Entries Supported: 128 00:12:10.208 Keep Alive: Supported 00:12:10.208 Keep Alive Granularity: 10000 ms 00:12:10.208 00:12:10.208 NVM Command Set Attributes 00:12:10.208 ========================== 00:12:10.208 Submission Queue Entry Size 00:12:10.208 Max: 64 00:12:10.208 Min: 64 00:12:10.208 Completion Queue Entry Size 00:12:10.208 Max: 16 00:12:10.208 Min: 16 00:12:10.208 Number of Namespaces: 32 00:12:10.208 Compare Command: Supported 00:12:10.208 Write Uncorrectable Command: Not Supported 00:12:10.208 Dataset Management Command: Supported 00:12:10.208 Write Zeroes Command: Supported 00:12:10.208 Set Features Save Field: Not Supported 00:12:10.208 Reservations: Not Supported 00:12:10.208 Timestamp: Not Supported 00:12:10.208 Copy: Supported 00:12:10.208 Volatile Write Cache: Present 00:12:10.208 Atomic Write Unit (Normal): 1 00:12:10.208 Atomic Write Unit (PFail): 1 00:12:10.208 Atomic Compare & Write Unit: 1 00:12:10.208 Fused Compare & Write: Supported 00:12:10.208 Scatter-Gather List 00:12:10.208 SGL Command Set: Supported (Dword aligned) 00:12:10.208 SGL Keyed: Not Supported 00:12:10.208 SGL Bit Bucket Descriptor: Not Supported 00:12:10.208 SGL Metadata Pointer: Not Supported 00:12:10.208 Oversized SGL: Not Supported 00:12:10.208 SGL Metadata Address: Not Supported 00:12:10.208 SGL Offset: Not Supported 00:12:10.208 Transport SGL Data Block: Not Supported 00:12:10.208 Replay Protected Memory Block: Not Supported 00:12:10.208 00:12:10.208 Firmware Slot Information 00:12:10.208 ========================= 00:12:10.208 Active slot: 1 00:12:10.208 Slot 1 Firmware Revision: 24.09 00:12:10.208 00:12:10.208 00:12:10.208 Commands Supported and Effects 00:12:10.208 ============================== 00:12:10.208 Admin Commands 00:12:10.208 -------------- 00:12:10.208 Get Log Page (02h): Supported 00:12:10.208 Identify (06h): Supported 00:12:10.208 Abort (08h): Supported 00:12:10.208 Set Features (09h): Supported 00:12:10.208 Get Features (0Ah): Supported 00:12:10.208 Asynchronous Event Request (0Ch): Supported 00:12:10.208 Keep Alive (18h): Supported 00:12:10.208 I/O Commands 00:12:10.208 ------------ 00:12:10.208 Flush (00h): Supported LBA-Change 00:12:10.208 Write (01h): Supported LBA-Change 00:12:10.208 Read (02h): Supported 00:12:10.208 Compare (05h): Supported 00:12:10.208 Write Zeroes (08h): Supported LBA-Change 00:12:10.208 Dataset Management (09h): Supported LBA-Change 00:12:10.208 Copy (19h): Supported LBA-Change 00:12:10.208 00:12:10.208 Error Log 00:12:10.208 ========= 00:12:10.208 00:12:10.208 Arbitration 00:12:10.208 =========== 00:12:10.208 Arbitration Burst: 1 00:12:10.208 00:12:10.208 Power Management 00:12:10.208 ================ 00:12:10.208 Number of Power States: 1 00:12:10.208 Current Power State: Power State #0 00:12:10.208 Power State #0: 00:12:10.208 Max Power: 0.00 W 00:12:10.208 Non-Operational State: Operational 00:12:10.208 Entry Latency: Not Reported 00:12:10.208 Exit Latency: Not Reported 00:12:10.208 Relative Read Throughput: 0 00:12:10.208 Relative Read Latency: 0 00:12:10.208 Relative Write Throughput: 0 00:12:10.208 Relative Write Latency: 0 00:12:10.208 Idle Power: Not Reported 00:12:10.208 Active Power: Not Reported 00:12:10.208 
Non-Operational Permissive Mode: Not Supported 00:12:10.208 00:12:10.208 Health Information 00:12:10.208 ================== 00:12:10.208 Critical Warnings: 00:12:10.208 Available Spare Space: OK 00:12:10.208 Temperature: OK 00:12:10.208 Device Reliability: OK 00:12:10.208 Read Only: No 00:12:10.208 Volatile Memory Backup: OK 00:12:10.208 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:10.208 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:10.208 Available Spare: 0% 00:12:10.208 [2024-07-15 07:47:54.847351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:10.208 [2024-07-15 07:47:54.855232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:10.208 [2024-07-15 07:47:54.855264] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:10.208 [2024-07-15 07:47:54.855273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.208 [2024-07-15 07:47:54.855278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.208 [2024-07-15 07:47:54.855284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.208 [2024-07-15 07:47:54.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.208 [2024-07-15 07:47:54.855329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:10.208 [2024-07-15 07:47:54.855339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:10.208 [2024-07-15 07:47:54.856332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.208 [2024-07-15 07:47:54.856374] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:10.208 [2024-07-15 07:47:54.856380] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:10.208 [2024-07-15 07:47:54.857331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:10.208 [2024-07-15 07:47:54.857342] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:10.208 [2024-07-15 07:47:54.857387] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:10.208 [2024-07-15 07:47:54.858369] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:10.208 Available Spare Threshold: 0% 00:12:10.208 Life Percentage Used: 0% 00:12:10.208 Data Units Read: 0 00:12:10.208 Data Units Written: 0 00:12:10.208 Host Read Commands: 0 00:12:10.208 Host Write Commands: 0 00:12:10.208 Controller Busy Time: 0 minutes 00:12:10.208 Power Cycles: 0 00:12:10.208 Power On Hours: 0 hours 00:12:10.208 Unsafe Shutdowns: 0 00:12:10.208 Unrecoverable Media
Errors: 0 00:12:10.208 Lifetime Error Log Entries: 0 00:12:10.208 Warning Temperature Time: 0 minutes 00:12:10.208 Critical Temperature Time: 0 minutes 00:12:10.208 00:12:10.208 Number of Queues 00:12:10.208 ================ 00:12:10.208 Number of I/O Submission Queues: 127 00:12:10.208 Number of I/O Completion Queues: 127 00:12:10.208 00:12:10.208 Active Namespaces 00:12:10.208 ================= 00:12:10.208 Namespace ID:1 00:12:10.208 Error Recovery Timeout: Unlimited 00:12:10.208 Command Set Identifier: NVM (00h) 00:12:10.208 Deallocate: Supported 00:12:10.208 Deallocated/Unwritten Error: Not Supported 00:12:10.208 Deallocated Read Value: Unknown 00:12:10.208 Deallocate in Write Zeroes: Not Supported 00:12:10.208 Deallocated Guard Field: 0xFFFF 00:12:10.208 Flush: Supported 00:12:10.208 Reservation: Supported 00:12:10.208 Namespace Sharing Capabilities: Multiple Controllers 00:12:10.208 Size (in LBAs): 131072 (0GiB) 00:12:10.208 Capacity (in LBAs): 131072 (0GiB) 00:12:10.208 Utilization (in LBAs): 131072 (0GiB) 00:12:10.208 NGUID: DA4FD02D94984082BD0C14ED1ECBEC43 00:12:10.208 UUID: da4fd02d-9498-4082-bd0c-14ed1ecbec43 00:12:10.208 Thin Provisioning: Not Supported 00:12:10.208 Per-NS Atomic Units: Yes 00:12:10.208 Atomic Boundary Size (Normal): 0 00:12:10.208 Atomic Boundary Size (PFail): 0 00:12:10.208 Atomic Boundary Offset: 0 00:12:10.208 Maximum Single Source Range Length: 65535 00:12:10.208 Maximum Copy Length: 65535 00:12:10.208 Maximum Source Range Count: 1 00:12:10.208 NGUID/EUI64 Never Reused: No 00:12:10.208 Namespace Write Protected: No 00:12:10.208 Number of LBA Formats: 1 00:12:10.208 Current LBA Format: LBA Format #00 00:12:10.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.208 00:12:10.208 07:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:10.208 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.468 [2024-07-15 07:47:55.072590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:15.740 Initializing NVMe Controllers 00:12:15.740 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.740 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:15.740 Initialization complete. Launching workers. 
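(For reference: the 4 KiB read pass launched above is SPDK's bundled spdk_nvme_perf tool pointed at the vfio-user endpoint; its results table follows below. A standalone re-run would look like this sketch, with every flag and path copied from the invocation in this log and the target assumed to still be listening:)
# 4 KiB reads, queue depth 128, 5 s, core mask 0x2 (core 1 only)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2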
00:12:15.740 ======================================================== 00:12:15.740 Latency(us) 00:12:15.740 Device Information : IOPS MiB/s Average min max 00:12:15.740 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39872.05 155.75 3210.09 961.03 10619.30 00:12:15.741 ======================================================== 00:12:15.741 Total : 39872.05 155.75 3210.09 961.03 10619.30 00:12:15.741 00:12:15.741 [2024-07-15 07:48:00.181477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:15.741 07:48:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:15.741 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.741 [2024-07-15 07:48:00.396117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.011 Initializing NVMe Controllers 00:12:21.011 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:21.011 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:21.011 Initialization complete. Launching workers. 00:12:21.011 ======================================================== 00:12:21.011 Latency(us) 00:12:21.011 Device Information : IOPS MiB/s Average min max 00:12:21.011 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39938.04 156.01 3204.78 968.45 7168.17 00:12:21.011 ======================================================== 00:12:21.011 Total : 39938.04 156.01 3204.78 968.45 7168.17 00:12:21.011 00:12:21.011 [2024-07-15 07:48:05.416366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.011 07:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:21.011 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.011 [2024-07-15 07:48:05.615808] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.284 [2024-07-15 07:48:10.753322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.284 Initializing NVMe Controllers 00:12:26.284 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:26.284 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:26.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:26.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:26.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:26.284 Initialization complete. Launching workers. 
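(The write pass above repeats the same spdk_nvme_perf invocation with -w write; the reconnect example just launched then drives 50/50 random read/write from three worker cores while exercising the reconnect path. A sketch of that invocation, flags copied from this log:)
# mixed 4 KiB randrw, qd=32 per worker, 5 s, cores 1-3 (mask 0xE)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE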
00:12:26.284 Starting thread on core 2 00:12:26.284 Starting thread on core 3 00:12:26.284 Starting thread on core 1 00:12:26.284 07:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:26.284 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.284 [2024-07-15 07:48:11.029589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.573 [2024-07-15 07:48:14.085246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.573 Initializing NVMe Controllers 00:12:29.573 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.573 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.573 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:29.573 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:29.573 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:29.573 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:29.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:29.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:29.573 Initialization complete. Launching workers. 00:12:29.573 Starting thread on core 1 with urgent priority queue 00:12:29.573 Starting thread on core 2 with urgent priority queue 00:12:29.573 Starting thread on core 3 with urgent priority queue 00:12:29.573 Starting thread on core 0 with urgent priority queue 00:12:29.573 SPDK bdev Controller (SPDK2 ) core 0: 10353.67 IO/s 9.66 secs/100000 ios 00:12:29.573 SPDK bdev Controller (SPDK2 ) core 1: 9242.00 IO/s 10.82 secs/100000 ios 00:12:29.573 SPDK bdev Controller (SPDK2 ) core 2: 7748.00 IO/s 12.91 secs/100000 ios 00:12:29.573 SPDK bdev Controller (SPDK2 ) core 3: 10805.33 IO/s 9.25 secs/100000 ios 00:12:29.573 ======================================================== 00:12:29.573 00:12:29.573 07:48:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:29.573 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.831 [2024-07-15 07:48:14.346341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.831 Initializing NVMe Controllers 00:12:29.831 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.831 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.831 Namespace ID: 1 size: 0GB 00:12:29.831 Initialization complete. 00:12:29.831 INFO: using host memory buffer for IO 00:12:29.831 Hello world! 
00:12:29.831 [2024-07-15 07:48:14.358423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.831 07:48:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.089 [2024-07-15 07:48:14.625507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:31.039 Initializing NVMe Controllers 00:12:31.039 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:31.039 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:31.039 Initialization complete. Launching workers. 00:12:31.039 submit (in ns) avg, min, max = 6931.1, 3193.9, 4001430.4 00:12:31.039 complete (in ns) avg, min, max = 21207.0, 1765.2, 3999942.6 00:12:31.039 00:12:31.039 Submit histogram 00:12:31.039 ================ 00:12:31.039 Range in us Cumulative Count 00:12:31.039 3.186 - 3.200: 0.0124% ( 2) 00:12:31.039 3.200 - 3.214: 0.0186% ( 1) 00:12:31.039 3.214 - 3.228: 0.0558% ( 6) 00:12:31.039 3.228 - 3.242: 0.1178% ( 10) 00:12:31.039 3.242 - 3.256: 0.1673% ( 8) 00:12:31.039 3.256 - 3.270: 0.2789% ( 18) 00:12:31.039 3.270 - 3.283: 0.8305% ( 89) 00:12:31.039 3.283 - 3.297: 3.4585% ( 424) 00:12:31.039 3.297 - 3.311: 6.9729% ( 567) 00:12:31.039 3.311 - 3.325: 10.6917% ( 600) 00:12:31.039 3.325 - 3.339: 14.8072% ( 664) 00:12:31.039 3.339 - 3.353: 19.7967% ( 805) 00:12:31.039 3.353 - 3.367: 24.8977% ( 823) 00:12:31.039 3.367 - 3.381: 30.4202% ( 891) 00:12:31.039 3.381 - 3.395: 35.9737% ( 896) 00:12:31.039 3.395 - 3.409: 41.0686% ( 822) 00:12:31.039 3.409 - 3.423: 45.6737% ( 743) 00:12:31.039 3.423 - 3.437: 50.5144% ( 781) 00:12:31.039 3.437 - 3.450: 56.7993% ( 1014) 00:12:31.039 3.450 - 3.464: 61.2123% ( 712) 00:12:31.039 3.464 - 3.478: 65.3031% ( 660) 00:12:31.039 3.478 - 3.492: 70.6396% ( 861) 00:12:31.039 3.492 - 3.506: 75.6291% ( 805) 00:12:31.039 3.506 - 3.520: 79.0690% ( 555) 00:12:31.039 3.520 - 3.534: 81.4925% ( 391) 00:12:31.039 3.534 - 3.548: 84.0709% ( 416) 00:12:31.039 3.548 - 3.562: 85.7444% ( 270) 00:12:31.039 3.562 - 3.590: 87.6224% ( 303) 00:12:31.039 3.590 - 3.617: 88.8744% ( 202) 00:12:31.039 3.617 - 3.645: 90.3062% ( 231) 00:12:31.039 3.645 - 3.673: 91.8557% ( 250) 00:12:31.039 3.673 - 3.701: 93.6346% ( 287) 00:12:31.039 3.701 - 3.729: 95.1717% ( 248) 00:12:31.039 3.729 - 3.757: 96.7522% ( 255) 00:12:31.039 3.757 - 3.784: 97.8245% ( 173) 00:12:31.039 3.784 - 3.812: 98.6116% ( 127) 00:12:31.039 3.812 - 3.840: 99.0951% ( 78) 00:12:31.039 3.840 - 3.868: 99.3554% ( 42) 00:12:31.039 3.868 - 3.896: 99.5227% ( 27) 00:12:31.039 3.896 - 3.923: 99.5599% ( 6) 00:12:31.039 3.923 - 3.951: 99.6095% ( 8) 00:12:31.039 3.979 - 4.007: 99.6219% ( 2) 00:12:31.039 4.007 - 4.035: 99.6281% ( 1) 00:12:31.039 4.035 - 4.063: 99.6343% ( 1) 00:12:31.039 4.146 - 4.174: 99.6405% ( 1) 00:12:31.039 5.148 - 5.176: 99.6467% ( 1) 00:12:31.039 5.231 - 5.259: 99.6529% ( 1) 00:12:31.039 5.398 - 5.426: 99.6591% ( 1) 00:12:31.039 5.482 - 5.510: 99.6653% ( 1) 00:12:31.039 5.537 - 5.565: 99.6715% ( 1) 00:12:31.039 5.899 - 5.927: 99.6777% ( 1) 00:12:31.039 5.955 - 5.983: 99.6839% ( 1) 00:12:31.039 6.066 - 6.094: 99.6901% ( 1) 00:12:31.039 6.205 - 6.233: 99.6963% ( 1) 00:12:31.039 6.289 - 6.317: 99.7025% ( 1) 00:12:31.039 6.317 - 6.344: 99.7087% ( 1) 00:12:31.039 6.456 - 
6.483: 99.7149% ( 1) 00:12:31.039 6.539 - 6.567: 99.7211% ( 1) 00:12:31.039 6.706 - 6.734: 99.7273% ( 1) 00:12:31.039 6.817 - 6.845: 99.7335% ( 1) 00:12:31.039 6.929 - 6.957: 99.7397% ( 1) 00:12:31.039 6.957 - 6.984: 99.7459% ( 1) 00:12:31.039 7.012 - 7.040: 99.7521% ( 1) 00:12:31.039 7.040 - 7.068: 99.7583% ( 1) 00:12:31.039 7.123 - 7.179: 99.7645% ( 1) 00:12:31.039 7.402 - 7.457: 99.7707% ( 1) 00:12:31.039 7.457 - 7.513: 99.7831% ( 2) 00:12:31.039 7.903 - 7.958: 99.7893% ( 1) 00:12:31.039 7.958 - 8.014: 99.7955% ( 1) 00:12:31.039 8.070 - 8.125: 99.8141% ( 3) 00:12:31.039 8.181 - 8.237: 99.8203% ( 1) 00:12:31.039 8.403 - 8.459: 99.8265% ( 1) 00:12:31.039 8.459 - 8.515: 99.8388% ( 2) 00:12:31.039 8.515 - 8.570: 99.8512% ( 2) 00:12:31.039 8.682 - 8.737: 99.8574% ( 1) 00:12:31.039 8.849 - 8.904: 99.8698% ( 2) 00:12:31.039 9.183 - 9.238: 99.8760% ( 1) 00:12:31.039 9.238 - 9.294: 99.8884% ( 2) 00:12:31.039 9.294 - 9.350: 99.9008% ( 2) 00:12:31.039 [2024-07-15 07:48:15.720308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:31.039 10.017 - 10.073: 99.9070% ( 1) 00:12:31.039 10.129 - 10.184: 99.9132% ( 1) 00:12:31.039 3989.148 - 4017.642: 100.0000% ( 14) 00:12:31.039 00:12:31.039 Complete histogram 00:12:31.039 ================== 00:12:31.040 Range in us Cumulative Count 00:12:31.040 1.760 - 1.767: 0.0062% ( 1) 00:12:31.040 1.767 - 1.774: 0.0310% ( 4) 00:12:31.040 1.774 - 1.781: 0.0682% ( 6) 00:12:31.040 1.781 - 1.795: 0.0992% ( 5) 00:12:31.040 1.795 - 1.809: 0.1426% ( 7) 00:12:31.040 1.809 - 1.823: 2.3863% ( 362) 00:12:31.040 1.823 - 1.837: 7.5245% ( 829) 00:12:31.040 1.837 - 1.850: 9.7806% ( 364) 00:12:31.040 1.850 - 1.864: 11.4975% ( 277) 00:12:31.040 1.864 - 1.878: 38.0377% ( 4282) 00:12:31.040 1.878 - 1.892: 82.7755% ( 7218) 00:12:31.040 1.892 - 1.906: 93.1449% ( 1673) 00:12:31.040 1.906 - 1.920: 95.4444% ( 371) 00:12:31.040 1.920 - 1.934: 96.2192% ( 125) 00:12:31.040 1.934 - 1.948: 96.9319% ( 115) 00:12:31.040 1.948 - 1.962: 98.0538% ( 181) 00:12:31.040 1.962 - 1.976: 98.8286% ( 125) 00:12:31.040 1.976 - 1.990: 99.1199% ( 47) 00:12:31.040 1.990 - 2.003: 99.1819% ( 10) 00:12:31.040 2.003 - 2.017: 99.2190% ( 6) 00:12:31.040 2.017 - 2.031: 99.2438% ( 4) 00:12:31.040 2.031 - 2.045: 99.2562% ( 2) 00:12:31.040 2.045 - 2.059: 99.2748% ( 3) 00:12:31.040 2.087 - 2.101: 99.2810% ( 1) 00:12:31.040 2.337 - 2.351: 99.2872% ( 1) 00:12:31.040 2.421 - 2.435: 99.2934% ( 1) 00:12:31.040 3.701 - 3.729: 99.2996% ( 1) 00:12:31.040 4.007 - 4.035: 99.3058% ( 1) 00:12:31.040 4.090 - 4.118: 99.3120% ( 1) 00:12:31.040 4.397 - 4.424: 99.3182% ( 1) 00:12:31.040 4.619 - 4.647: 99.3244% ( 1) 00:12:31.040 4.675 - 4.703: 99.3306% ( 1) 00:12:31.040 4.842 - 4.870: 99.3368% ( 1) 00:12:31.040 5.009 - 5.037: 99.3430% ( 1) 00:12:31.040 5.259 - 5.287: 99.3492% ( 1) 00:12:31.040 5.482 - 5.510: 99.3554% ( 1) 00:12:31.040 5.510 - 5.537: 99.3616% ( 1) 00:12:31.040 5.760 - 5.788: 99.3678% ( 1) 00:12:31.040 6.122 - 6.150: 99.3740% ( 1) 00:12:31.040 6.456 - 6.483: 99.3802% ( 1) 00:12:31.040 6.595 - 6.623: 99.3864% ( 1) 00:12:31.040 6.650 - 6.678: 99.3926% ( 1) 00:12:31.040 6.817 - 6.845: 99.3988% ( 1) 00:12:31.040 6.845 - 6.873: 99.4050% ( 1) 00:12:31.040 6.957 - 6.984: 99.4174% ( 2) 00:12:31.040 7.346 - 7.402: 99.4298% ( 2) 00:12:31.040 7.402 - 7.457: 99.4360% ( 1) 00:12:31.040 7.457 - 7.513: 99.4422% ( 1) 00:12:31.040 7.569 - 7.624: 99.4484% ( 1) 00:12:31.040 7.847 - 7.903: 99.4546% ( 1) 00:12:31.040 7.903 - 7.958: 99.4670% ( 2) 00:12:31.040 8.125 - 8.181: 99.4732% ( 1) 
00:12:31.040 8.403 - 8.459: 99.4794% ( 1) 00:12:31.040 8.904 - 8.960: 99.4856% ( 1) 00:12:31.040 9.183 - 9.238: 99.4918% ( 1) 00:12:31.040 12.243 - 12.299: 99.4980% ( 1) 00:12:31.040 12.355 - 12.410: 99.5042% ( 1) 00:12:31.040 17.586 - 17.697: 99.5165% ( 2) 00:12:31.040 3989.148 - 4017.642: 100.0000% ( 78) 00:12:31.040 00:12:31.040 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:31.040 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:31.040 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:31.040 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:31.040 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:31.298 [ 00:12:31.298 { 00:12:31.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:31.298 "subtype": "Discovery", 00:12:31.298 "listen_addresses": [], 00:12:31.298 "allow_any_host": true, 00:12:31.298 "hosts": [] 00:12:31.298 }, 00:12:31.298 { 00:12:31.298 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:31.298 "subtype": "NVMe", 00:12:31.298 "listen_addresses": [ 00:12:31.298 { 00:12:31.298 "trtype": "VFIOUSER", 00:12:31.298 "adrfam": "IPv4", 00:12:31.298 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:31.298 "trsvcid": "0" 00:12:31.298 } 00:12:31.298 ], 00:12:31.298 "allow_any_host": true, 00:12:31.298 "hosts": [], 00:12:31.298 "serial_number": "SPDK1", 00:12:31.298 "model_number": "SPDK bdev Controller", 00:12:31.298 "max_namespaces": 32, 00:12:31.298 "min_cntlid": 1, 00:12:31.298 "max_cntlid": 65519, 00:12:31.298 "namespaces": [ 00:12:31.298 { 00:12:31.298 "nsid": 1, 00:12:31.298 "bdev_name": "Malloc1", 00:12:31.298 "name": "Malloc1", 00:12:31.298 "nguid": "B128144F450A420F9C5A713BF15ED996", 00:12:31.298 "uuid": "b128144f-450a-420f-9c5a-713bf15ed996" 00:12:31.298 }, 00:12:31.298 { 00:12:31.298 "nsid": 2, 00:12:31.298 "bdev_name": "Malloc3", 00:12:31.298 "name": "Malloc3", 00:12:31.298 "nguid": "34D63E5D070E4537A8F4604A1E3A07E7", 00:12:31.298 "uuid": "34d63e5d-070e-4537-a8f4-604a1e3a07e7" 00:12:31.298 } 00:12:31.298 ] 00:12:31.298 }, 00:12:31.298 { 00:12:31.298 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:31.298 "subtype": "NVMe", 00:12:31.298 "listen_addresses": [ 00:12:31.298 { 00:12:31.298 "trtype": "VFIOUSER", 00:12:31.298 "adrfam": "IPv4", 00:12:31.298 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:31.298 "trsvcid": "0" 00:12:31.298 } 00:12:31.298 ], 00:12:31.298 "allow_any_host": true, 00:12:31.298 "hosts": [], 00:12:31.298 "serial_number": "SPDK2", 00:12:31.298 "model_number": "SPDK bdev Controller", 00:12:31.298 "max_namespaces": 32, 00:12:31.298 "min_cntlid": 1, 00:12:31.298 "max_cntlid": 65519, 00:12:31.298 "namespaces": [ 00:12:31.298 { 00:12:31.298 "nsid": 1, 00:12:31.298 "bdev_name": "Malloc2", 00:12:31.299 "name": "Malloc2", 00:12:31.299 "nguid": "DA4FD02D94984082BD0C14ED1ECBEC43", 00:12:31.299 "uuid": "da4fd02d-9498-4082-bd0c-14ed1ecbec43" 00:12:31.299 } 00:12:31.299 ] 00:12:31.299 } 00:12:31.299 ] 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3182632 00:12:31.299 07:48:15 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:31.299 07:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:31.299 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.557 [2024-07-15 07:48:16.085620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:31.557 Malloc4 00:12:31.557 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:31.814 [2024-07-15 07:48:16.313372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:31.814 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:31.814 Asynchronous Event Request test 00:12:31.814 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:31.814 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:31.814 Registering asynchronous event callbacks... 00:12:31.814 Starting namespace attribute notice tests for all controllers... 00:12:31.814 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:31.814 aer_cb - Changed Namespace 00:12:31.814 Cleaning up... 
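(The AER test above hot-adds a namespace and waits for the resulting Changed Namespace notice on log page 4; the subsystem dump that follows shows the new namespace in place. The three RPCs involved, exactly as issued in this run, with $rpc as illustrative shorthand for the full rpc.py path:)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 --name Malloc4                        # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach as NSID 2 -> AEN fires
$rpc nvmf_get_subsystems                                             # dump below lists Malloc4 as nsid 2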
00:12:31.814 [ 00:12:31.814 { 00:12:31.814 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:31.814 "subtype": "Discovery", 00:12:31.814 "listen_addresses": [], 00:12:31.814 "allow_any_host": true, 00:12:31.814 "hosts": [] 00:12:31.814 }, 00:12:31.814 { 00:12:31.814 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:31.814 "subtype": "NVMe", 00:12:31.814 "listen_addresses": [ 00:12:31.814 { 00:12:31.814 "trtype": "VFIOUSER", 00:12:31.814 "adrfam": "IPv4", 00:12:31.814 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:31.814 "trsvcid": "0" 00:12:31.814 } 00:12:31.814 ], 00:12:31.814 "allow_any_host": true, 00:12:31.814 "hosts": [], 00:12:31.814 "serial_number": "SPDK1", 00:12:31.814 "model_number": "SPDK bdev Controller", 00:12:31.814 "max_namespaces": 32, 00:12:31.814 "min_cntlid": 1, 00:12:31.814 "max_cntlid": 65519, 00:12:31.814 "namespaces": [ 00:12:31.814 { 00:12:31.814 "nsid": 1, 00:12:31.814 "bdev_name": "Malloc1", 00:12:31.814 "name": "Malloc1", 00:12:31.815 "nguid": "B128144F450A420F9C5A713BF15ED996", 00:12:31.815 "uuid": "b128144f-450a-420f-9c5a-713bf15ed996" 00:12:31.815 }, 00:12:31.815 { 00:12:31.815 "nsid": 2, 00:12:31.815 "bdev_name": "Malloc3", 00:12:31.815 "name": "Malloc3", 00:12:31.815 "nguid": "34D63E5D070E4537A8F4604A1E3A07E7", 00:12:31.815 "uuid": "34d63e5d-070e-4537-a8f4-604a1e3a07e7" 00:12:31.815 } 00:12:31.815 ] 00:12:31.815 }, 00:12:31.815 { 00:12:31.815 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:31.815 "subtype": "NVMe", 00:12:31.815 "listen_addresses": [ 00:12:31.815 { 00:12:31.815 "trtype": "VFIOUSER", 00:12:31.815 "adrfam": "IPv4", 00:12:31.815 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:31.815 "trsvcid": "0" 00:12:31.815 } 00:12:31.815 ], 00:12:31.815 "allow_any_host": true, 00:12:31.815 "hosts": [], 00:12:31.815 "serial_number": "SPDK2", 00:12:31.815 "model_number": "SPDK bdev Controller", 00:12:31.815 "max_namespaces": 32, 00:12:31.815 "min_cntlid": 1, 00:12:31.815 "max_cntlid": 65519, 00:12:31.815 "namespaces": [ 00:12:31.815 { 00:12:31.815 "nsid": 1, 00:12:31.815 "bdev_name": "Malloc2", 00:12:31.815 "name": "Malloc2", 00:12:31.815 "nguid": "DA4FD02D94984082BD0C14ED1ECBEC43", 00:12:31.815 "uuid": "da4fd02d-9498-4082-bd0c-14ed1ecbec43" 00:12:31.815 }, 00:12:31.815 { 00:12:31.815 "nsid": 2, 00:12:31.815 "bdev_name": "Malloc4", 00:12:31.815 "name": "Malloc4", 00:12:31.815 "nguid": "A0EFD9BE183A422B9F0B91F3432DBDC2", 00:12:31.815 "uuid": "a0efd9be-183a-422b-9f0b-91f3432dbdc2" 00:12:31.815 } 00:12:31.815 ] 00:12:31.815 } 00:12:31.815 ] 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3182632 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3174488 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3174488 ']' 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3174488 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.815 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3174488 00:12:32.073 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:32.073 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:32.073 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3174488' 00:12:32.073 killing process with pid 3174488 00:12:32.073 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3174488 00:12:32.073 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3174488 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3182867 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3182867' 00:12:32.333 Process pid: 3182867 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3182867 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3182867 ']' 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.333 07:48:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:32.333 [2024-07-15 07:48:16.887332] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:32.333 [2024-07-15 07:48:16.888190] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:12:32.333 [2024-07-15 07:48:16.888234] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.333 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.333 [2024-07-15 07:48:16.954645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.333 [2024-07-15 07:48:17.033580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.333 [2024-07-15 07:48:17.033630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:32.333 [2024-07-15 07:48:17.033637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.333 [2024-07-15 07:48:17.033643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.334 [2024-07-15 07:48:17.033649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.334 [2024-07-15 07:48:17.033893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.334 [2024-07-15 07:48:17.033919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.334 [2024-07-15 07:48:17.034023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.334 [2024-07-15 07:48:17.034025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.593 [2024-07-15 07:48:17.115996] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:32.593 [2024-07-15 07:48:17.116279] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:32.593 [2024-07-15 07:48:17.116415] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:32.593 [2024-07-15 07:48:17.116441] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:32.593 [2024-07-15 07:48:17.116780] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:33.161 07:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.161 07:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:33.161 07:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:34.098 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:34.358 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:34.358 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:34.358 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:34.358 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:34.358 07:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:34.358 Malloc1 00:12:34.358 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:34.617 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:34.876 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:35.136 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:35.136 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:35.136 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:35.136 Malloc2 00:12:35.136 07:48:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:35.395 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3182867 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3182867 ']' 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3182867 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.654 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3182867 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3182867' 00:12:35.914 killing process with pid 3182867 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3182867 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3182867 00:12:35.914 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:36.174 00:12:36.174 real 0m51.294s 00:12:36.174 user 3m22.882s 00:12:36.174 sys 0m3.624s 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:36.174 ************************************ 00:12:36.174 END TEST nvmf_vfio_user 00:12:36.174 ************************************ 00:12:36.174 07:48:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:36.174 07:48:20 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:36.174 07:48:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.174 07:48:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.174 07:48:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.174 ************************************ 00:12:36.174 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:36.174 ************************************ 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:36.174 * Looking for test storage... 00:12:36.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.174 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3183618 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3183618' 00:12:36.175 Process pid: 3183618 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3183618 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3183618 ']' 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.175 07:48:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.175 [2024-07-15 07:48:20.912548] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:12:36.175 [2024-07-15 07:48:20.912599] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.435 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.435 [2024-07-15 07:48:20.977891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.435 [2024-07-15 07:48:21.049102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.435 [2024-07-15 07:48:21.049142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.435 [2024-07-15 07:48:21.049149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.435 [2024-07-15 07:48:21.049155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.435 [2024-07-15 07:48:21.049160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
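An aside on the app_setup_trace notices above: they are the target telling you how to get at its trace ring once a 0xFFFF tracepoint mask is in effect. A minimal sketch of both options, assuming spdk_trace was built into build/bin alongside nvmf_tgt (the app name nvmf and shm id 0 come straight from the notices themselves):

  # decode a live snapshot of the running target's tracepoints
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or stash the shared-memory ring for offline analysis, as the last notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0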
00:12:36.435 [2024-07-15 07:48:21.049286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.435 [2024-07-15 07:48:21.049321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.435 [2024-07-15 07:48:21.049321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.004 07:48:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.004 07:48:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:37.004 07:48:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:38.384 malloc0 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:38.384 07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.384 
07:48:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:38.384 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.384 00:12:38.384 00:12:38.384 CUnit - A unit testing framework for C - Version 2.1-3 00:12:38.384 http://cunit.sourceforge.net/ 00:12:38.384 00:12:38.384 00:12:38.384 Suite: nvme_compliance 00:12:38.384 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 07:48:22.937741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.384 [2024-07-15 07:48:22.939072] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:38.384 [2024-07-15 07:48:22.939086] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:38.384 [2024-07-15 07:48:22.939091] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:38.384 [2024-07-15 07:48:22.940765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.384 passed 00:12:38.384 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 07:48:23.023345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.384 [2024-07-15 07:48:23.026366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.384 passed 00:12:38.384 Test: admin_identify_ns ...[2024-07-15 07:48:23.103702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.669 [2024-07-15 07:48:23.167245] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:38.669 [2024-07-15 07:48:23.175235] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:38.669 [2024-07-15 07:48:23.196323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.669 passed 00:12:38.669 Test: admin_get_features_mandatory_features ...[2024-07-15 07:48:23.273712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.669 [2024-07-15 07:48:23.276732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.669 passed 00:12:38.669 Test: admin_get_features_optional_features ...[2024-07-15 07:48:23.353237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.669 [2024-07-15 07:48:23.356256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.669 passed 00:12:38.927 Test: admin_set_features_number_of_queues ...[2024-07-15 07:48:23.435273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.927 [2024-07-15 07:48:23.541309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.927 passed 00:12:38.927 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 07:48:23.616501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.927 [2024-07-15 07:48:23.619529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.927 passed 00:12:39.185 Test: admin_get_log_page_with_lpo ...[2024-07-15 07:48:23.697545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.185 [2024-07-15 07:48:23.766235] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:39.185 [2024-07-15 07:48:23.779300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.185 passed 00:12:39.185 Test: fabric_property_get ...[2024-07-15 07:48:23.857425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.185 [2024-07-15 07:48:23.858662] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:39.185 [2024-07-15 07:48:23.860447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.185 passed 00:12:39.444 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 07:48:23.941969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.444 [2024-07-15 07:48:23.943216] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:39.444 [2024-07-15 07:48:23.944993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.444 passed 00:12:39.444 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 07:48:24.021677] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.444 [2024-07-15 07:48:24.105237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:39.444 [2024-07-15 07:48:24.121229] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:39.444 [2024-07-15 07:48:24.126323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.444 passed 00:12:39.702 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 07:48:24.204278] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.702 [2024-07-15 07:48:24.205509] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:39.702 [2024-07-15 07:48:24.207299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.702 passed 00:12:39.702 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 07:48:24.285261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.702 [2024-07-15 07:48:24.362235] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:39.702 [2024-07-15 07:48:24.386233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:39.702 [2024-07-15 07:48:24.391318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.702 passed 00:12:39.961 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 07:48:24.470353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.961 [2024-07-15 07:48:24.471581] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:39.961 [2024-07-15 07:48:24.471603] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:39.961 [2024-07-15 07:48:24.473375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.961 passed 00:12:39.961 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 07:48:24.551319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.961 [2024-07-15 07:48:24.644237] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1
00:12:39.961 [2024-07-15 07:48:24.652232] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:12:39.961 [2024-07-15 07:48:24.660231] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:12:39.961 [2024-07-15 07:48:24.668239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:12:39.961 [2024-07-15 07:48:24.697326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:12:40.220 passed
00:12:40.220 Test: admin_create_io_sq_verify_pc ...[2024-07-15 07:48:24.776416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:12:40.220 [2024-07-15 07:48:24.795243] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:12:40.220 [2024-07-15 07:48:24.812594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:12:40.220 passed
00:12:40.220 Test: admin_create_io_qp_max_qps ...[2024-07-15 07:48:24.890107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:12:41.599 [2024-07-15 07:48:25.979234] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:12:41.858 [2024-07-15 07:48:26.365363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:12:41.858 passed
00:12:41.858 Test: admin_create_io_sq_shared_cq ...[2024-07-15 07:48:26.441348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:12:41.858 [2024-07-15 07:48:26.575234] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:12:42.116 [2024-07-15 07:48:26.612287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:12:42.116 passed
00:12:42.116 
00:12:42.116 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:42.116               suites      1      1    n/a      0        0
00:12:42.116                tests     18     18     18      0        0
00:12:42.116             asserts    360    360    360      0      n/a
00:12:42.116 
00:12:42.116 Elapsed time = 1.507 seconds
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3183618
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3183618 ']'
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3183618
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3183618
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3183618'
00:12:42.116 killing process with pid 3183618
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3183618
00:12:42.116 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3183618
00:12:42.375 07:48:26
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:42.375 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:42.375 00:12:42.375 real 0m6.163s 00:12:42.375 user 0m17.598s 00:12:42.375 sys 0m0.484s 00:12:42.375 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.375 07:48:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:42.375 ************************************ 00:12:42.375 END TEST nvmf_vfio_user_nvme_compliance 00:12:42.375 ************************************ 00:12:42.375 07:48:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:42.375 07:48:26 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:42.375 07:48:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.375 07:48:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.375 07:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.375 ************************************ 00:12:42.375 START TEST nvmf_vfio_user_fuzz 00:12:42.375 ************************************ 00:12:42.375 07:48:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:42.375 * Looking for test storage... 00:12:42.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
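Worth pausing on the nvmf/common.sh defaults scrolling past here: NVMF_PORT=4420, the generated NVME_HOSTNQN/NVME_HOSTID pair, NVME_CONNECT, and NVME_SUBNQN just below are what the initiator-side tests later splice into an nvme-cli call. A hedged illustration of how they compose for the TCP suites (this fuzz run itself talks vfio-user instead, and 10.0.0.2 is the target address configured further down in this log):

  # roughly what $NVME_CONNECT "${NVME_HOST[@]}" expands to against the first TCP listener
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562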
00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.375 07:48:27 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3184615 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3184615' 00:12:42.375 Process pid: 3184615 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:42.375 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3184615 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3184615 ']' 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
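While waitforlisten spins here, all it is really doing is polling the target's RPC socket until a request succeeds (the rpc_addr and max_retries locals above are its knobs). A rough sketch of the idea, not the literal helper from autotest_common.sh:

  # poll until nvmf_tgt answers on its default RPC socket; 100 tries mirrors max_retries above
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1 && break
    sleep 0.1
  done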
00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.376 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:43.314 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.314 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:43.314 07:48:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:44.251 malloc0 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.251 07:48:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:44.251 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.251 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.509 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:44.510 07:48:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:16.734 Fuzzing completed. 
Shutting down the fuzz application
00:13:16.734 
00:13:16.734 Dumping successful admin opcodes:
00:13:16.734 8, 9, 10, 24,
00:13:16.734 Dumping successful io opcodes:
00:13:16.734 0,
00:13:16.734 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1036676, total successful commands: 4090, random_seed: 3251362624
00:13:16.734 NS: 0x200003a1ef00 admin qp, Total commands completed: 257533, total successful commands: 2078, random_seed: 1431097216
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3184615
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3184615 ']'
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3184615
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3184615
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:16.734 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3184615'
00:13:16.735 killing process with pid 3184615
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3184615
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3184615
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:13:16.735 
00:13:16.735 real 0m32.793s
00:13:16.735 user 0m31.177s
00:13:16.735 sys 0m31.508s
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:16.735 07:48:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:16.735 ************************************
00:13:16.735 END TEST nvmf_vfio_user_fuzz
00:13:16.735 ************************************
00:13:16.735 07:48:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:13:16.735 07:48:59 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:13:16.735 07:48:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:16.735 07:48:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:16.735 07:48:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:16.735 ************************************
00:13:16.735 START TEST nvmf_host_management 00:13:16.735 ************************************ 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:16.735 * Looking for test storage... 00:13:16.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.735 
07:48:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.735 07:48:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.735 07:48:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:20.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:20.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:20.955 Found net devices under 0000:86:00.0: cvl_0_0 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:20.955 Found net devices under 0000:86:00.1: cvl_0_1 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.955 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:20.956 00:13:20.956 --- 10.0.0.2 ping statistics --- 00:13:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.956 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:20.956 00:13:20.956 --- 10.0.0.1 ping statistics --- 00:13:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.956 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.956 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3193131 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3193131 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3193131 ']' 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:21.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.215 07:49:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.215 [2024-07-15 07:49:05.778022] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:13:21.215 [2024-07-15 07:49:05.778068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.215 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.216 [2024-07-15 07:49:05.847935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.216 [2024-07-15 07:49:05.924432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.216 [2024-07-15 07:49:05.924476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.216 [2024-07-15 07:49:05.924483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.216 [2024-07-15 07:49:05.924489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.216 [2024-07-15 07:49:05.924494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.216 [2024-07-15 07:49:05.924616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.216 [2024-07-15 07:49:05.924723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.216 [2024-07-15 07:49:05.924829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.216 [2024-07-15 07:49:05.924830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 [2024-07-15 07:49:06.629242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 07:49:06 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 Malloc0 00:13:22.152 [2024-07-15 07:49:06.689245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3193353 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3193353 /var/tmp/bdevperf.sock 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3193353 ']' 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
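[Annotation] The bring-up traced above (the @18 transport call, the Malloc0 bdev, the listener notice on 10.0.0.2 port 4420) reduces to a short rpc.py sequence. The sketch below is a plausible equivalent, not the generated rpcs.txt itself; the serial number and malloc geometry are assumptions, and host0 is added explicitly since the test later revokes it:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                      # matches the @18 call above
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB / 512 B blocks assumed
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0    # serial number illustrative
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420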
00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:22.152 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:22.152 { 00:13:22.152 "params": { 00:13:22.152 "name": "Nvme$subsystem", 00:13:22.152 "trtype": "$TEST_TRANSPORT", 00:13:22.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.152 "adrfam": "ipv4", 00:13:22.152 "trsvcid": "$NVMF_PORT", 00:13:22.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.152 "hdgst": ${hdgst:-false}, 00:13:22.152 "ddgst": ${ddgst:-false} 00:13:22.152 }, 00:13:22.152 "method": "bdev_nvme_attach_controller" 00:13:22.152 } 00:13:22.152 EOF 00:13:22.152 )") 00:13:22.153 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:22.153 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:22.153 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:22.153 07:49:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:22.153 "params": { 00:13:22.153 "name": "Nvme0", 00:13:22.153 "trtype": "tcp", 00:13:22.153 "traddr": "10.0.0.2", 00:13:22.153 "adrfam": "ipv4", 00:13:22.153 "trsvcid": "4420", 00:13:22.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:22.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:22.153 "hdgst": false, 00:13:22.153 "ddgst": false 00:13:22.153 }, 00:13:22.153 "method": "bdev_nvme_attach_controller" 00:13:22.153 }' 00:13:22.153 [2024-07-15 07:49:06.779434] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:13:22.153 [2024-07-15 07:49:06.779481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193353 ] 00:13:22.153 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.153 [2024-07-15 07:49:06.849102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.412 [2024-07-15 07:49:06.923791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.672 Running I/O for 10 seconds... 
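[Annotation] bdevperf does not talk to rpc.py here; gen_nvmf_target_json's output (the "params"/"method" fragment printed above) reaches it as a JSON config on /dev/fd/63 via process substitution, matching the --json /dev/fd/63 argument in the trace. A sketch of that invocation, with the outer "subsystems"/"bdev" wrapper assumed from SPDK's JSON-config shape rather than shown in this excerpt:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

$bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
)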
00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=718 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 718 -ge 100 ']' 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.932 [2024-07-15 07:49:07.672549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.672617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.672624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be 
set 00:13:22.932 [2024-07-15 07:49:07.672630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.672636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.672642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.672648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540460 is same with the state(5) to be set 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.932 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:22.932 [2024-07-15 07:49:07.679744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.932 [2024-07-15 07:49:07.679779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.932 [2024-07-15 07:49:07.679789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.932 [2024-07-15 07:49:07.679796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.932 [2024-07-15 07:49:07.679803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.932 [2024-07-15 07:49:07.679810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.932 [2024-07-15 07:49:07.679819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.932 [2024-07-15 07:49:07.679826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.932 [2024-07-15 07:49:07.679839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf11980 is same with the state(5) to be set 00:13:22.932 [2024-07-15 07:49:07.679875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.932 [2024-07-15 07:49:07.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.932 [2024-07-15 07:49:07.679901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.679918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.679936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.679954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.679972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.679990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.679999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.933 [2024-07-15 07:49:07.680646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:22.933 [2024-07-15 07:49:07.680654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.680984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.680992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.681004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.681012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.681022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:22.934 [2024-07-15 07:49:07.681030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.934 [2024-07-15 07:49:07.681094] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1322b20 was disconnected and freed. reset controller. 00:13:22.934 [2024-07-15 07:49:07.681998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:22.934 task offset: 106496 on job bdev=Nvme0n1 fails 00:13:22.934 00:13:22.934 Latency(us) 00:13:22.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.934 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:22.934 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:22.934 Verification LBA range: start 0x0 length 0x400 00:13:22.934 Nvme0n1 : 0.44 1911.28 119.46 147.02 0.00 30294.01 1823.61 27012.23 00:13:22.934 =================================================================================================================== 00:13:22.934 Total : 1911.28 119.46 147.02 0.00 30294.01 1823.61 27012.23 00:13:22.934 [2024-07-15 07:49:07.683664] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:22.934 [2024-07-15 07:49:07.683681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf11980 (9): Bad file descriptor 00:13:23.193 07:49:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.193 07:49:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:23.193 [2024-07-15 07:49:07.735353] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
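[Annotation] The abort storm and reset above are the test acting, not a fault: host_management.sh pulls the initiator's NQN out of cnode0's allowed-host list mid-I/O, the target answers by deleting the submission queues (the repeated ABORTED - SQ DELETION completions and the "task offset ... fails" latency row), and re-adding the host lets the bdev_nvme reset at bdev_nvme.c:2067 reconnect successfully. In rpc.py terms, roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# revoke: in-flight WRITEs complete as ABORTED - SQ DELETION, the qpair drops
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore: the controller reset logged above can now reconnect
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0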
00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3193353 00:13:24.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3193353) - No such process 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.127 { 00:13:24.127 "params": { 00:13:24.127 "name": "Nvme$subsystem", 00:13:24.127 "trtype": "$TEST_TRANSPORT", 00:13:24.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.127 "adrfam": "ipv4", 00:13:24.127 "trsvcid": "$NVMF_PORT", 00:13:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.127 "hdgst": ${hdgst:-false}, 00:13:24.127 "ddgst": ${ddgst:-false} 00:13:24.127 }, 00:13:24.127 "method": "bdev_nvme_attach_controller" 00:13:24.127 } 00:13:24.127 EOF 00:13:24.127 )") 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:24.127 07:49:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.127 "params": { 00:13:24.127 "name": "Nvme0", 00:13:24.127 "trtype": "tcp", 00:13:24.127 "traddr": "10.0.0.2", 00:13:24.127 "adrfam": "ipv4", 00:13:24.127 "trsvcid": "4420", 00:13:24.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:24.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:24.127 "hdgst": false, 00:13:24.127 "ddgst": false 00:13:24.127 }, 00:13:24.127 "method": "bdev_nvme_attach_controller" 00:13:24.127 }' 00:13:24.127 [2024-07-15 07:49:08.739973] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:13:24.127 [2024-07-15 07:49:08.740022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193649 ] 00:13:24.127 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.127 [2024-07-15 07:49:08.807811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.127 [2024-07-15 07:49:08.878058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.386 Running I/O for 1 seconds... 
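[Annotation] The earlier read_io_count=718 check (the @54/@55 trace before the host removal) comes from polling bdevperf's private RPC socket, not the target's. A condensed sketch of that waitforio loop; the jq path and the 100-op threshold are taken from the trace, while the sleep pacing is an assumption since the script's own delay is not in this excerpt:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for ((i = 10; i != 0; i--)); do
    read_io_count=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break    # the "'[' 718 -ge 100 ']'" check above
    sleep 0.25                               # pacing assumed
done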
00:13:25.321 00:13:25.321 Latency(us) 00:13:25.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.321 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:25.321 Verification LBA range: start 0x0 length 0x400 00:13:25.321 Nvme0n1 : 1.02 1936.93 121.06 0.00 0.00 32528.29 6468.12 27240.18 00:13:25.321 =================================================================================================================== 00:13:25.321 Total : 1936.93 121.06 0.00 0.00 32528.29 6468.12 27240.18 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.579 rmmod nvme_tcp 00:13:25.579 rmmod nvme_fabrics 00:13:25.579 rmmod nvme_keyring 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3193131 ']' 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3193131 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3193131 ']' 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3193131 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:25.579 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3193131 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3193131' 00:13:25.838 killing process with pid 3193131 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3193131 00:13:25.838 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3193131 00:13:25.839 [2024-07-15 07:49:10.552564] 
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.839 07:49:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.376 07:49:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.376 07:49:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:28.376 00:13:28.376 real 0m12.807s 00:13:28.376 user 0m22.562s 00:13:28.376 sys 0m5.540s 00:13:28.376 07:49:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.376 07:49:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:28.376 ************************************ 00:13:28.376 END TEST nvmf_host_management 00:13:28.376 ************************************ 00:13:28.376 07:49:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.376 07:49:12 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:28.376 07:49:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.376 07:49:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.376 07:49:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.376 ************************************ 00:13:28.376 START TEST nvmf_lvol 00:13:28.376 ************************************ 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:28.376 * Looking for test storage... 
00:13:28.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.376 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.377 07:49:12 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.377 07:49:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.650 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.650 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.650 
07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.650 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:13:33.909 00:13:33.909 --- 10.0.0.2 ping statistics --- 00:13:33.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.909 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:13:33.909 00:13:33.909 --- 10.0.0.1 ping statistics --- 00:13:33.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.909 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:33.909 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3197406 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3197406 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3197406 ']' 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.910 07:49:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:33.910 [2024-07-15 07:49:18.647487] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:13:33.910 [2024-07-15 07:49:18.647530] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.168 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.168 [2024-07-15 07:49:18.717274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:34.168 [2024-07-15 07:49:18.796234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.168 [2024-07-15 07:49:18.796267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:34.168 [2024-07-15 07:49:18.796279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.168 [2024-07-15 07:49:18.796285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.168 [2024-07-15 07:49:18.796290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.168 [2024-07-15 07:49:18.796333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.168 [2024-07-15 07:49:18.796449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.168 [2024-07-15 07:49:18.796450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.735 07:49:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.735 07:49:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:34.735 07:49:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.735 07:49:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.735 07:49:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:34.992 07:49:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.992 07:49:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.992 [2024-07-15 07:49:19.657740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.992 07:49:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:35.250 07:49:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:35.250 07:49:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:35.509 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:35.509 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:35.766 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:35.766 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1ad7569c-7728-4a72-9f4a-a9d36df71ee6 00:13:35.766 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ad7569c-7728-4a72-9f4a-a9d36df71ee6 lvol 20 00:13:36.024 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=39af4e3d-d4da-43a6-bd6e-69317df56093 00:13:36.024 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:36.281 07:49:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39af4e3d-d4da-43a6-bd6e-69317df56093 00:13:36.539 07:49:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
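
Condensed, the backing-store and export chain the nvmf_lvol test has just assembled is roughly the following. This is a minimal sketch reusing the rpc.py calls logged above; <lvs-uuid> and <lvol-uuid> stand in for the per-run UUIDs (1ad7569c-7728-4a72-9f4a-a9d36df71ee6 and 39af4e3d-d4da-43a6-bd6e-69317df56093 in this run), and paths are shortened:

    rpc.py bdev_malloc_create 64 512                          # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512                          # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs                 # prints <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20             # 20 MiB lvol, prints <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener confirmation for that final call is printed below.
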
00:13:36.539 [2024-07-15 07:49:21.215386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.539 07:49:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.796 07:49:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3197899 00:13:36.796 07:49:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:36.796 07:49:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:36.796 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.731 07:49:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 39af4e3d-d4da-43a6-bd6e-69317df56093 MY_SNAPSHOT 00:13:37.989 07:49:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d14be4eb-d882-4198-9ec5-490eb96cfa5c 00:13:37.990 07:49:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 39af4e3d-d4da-43a6-bd6e-69317df56093 30 00:13:38.248 07:49:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d14be4eb-d882-4198-9ec5-490eb96cfa5c MY_CLONE 00:13:38.507 07:49:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6609f74a-c239-4d36-877a-6b06ce25579e 00:13:38.507 07:49:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6609f74a-c239-4d36-877a-6b06ce25579e 00:13:39.073 07:49:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3197899 00:13:47.218 Initializing NVMe Controllers 00:13:47.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:47.218 Controller IO queue size 128, less than required. 00:13:47.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:47.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:47.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:47.218 Initialization complete. Launching workers. 
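
While spdk_nvme_perf holds its 10 second randwrite load against the exported namespace (invocation as logged above: -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18), the script mutates the lvol underneath it. A condensed sketch using the same rpc.py calls the log shows, with placeholder UUIDs for the per-run snapshot and clone (d14be4eb-... and 6609f74a-... in this run):

    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT         # prints <snap-uuid>
    rpc.py bdev_lvol_resize   <lvol-uuid> 30                  # grow the lvol 20 MiB -> 30 MiB
    rpc.py bdev_lvol_clone    <snap-uuid> MY_CLONE            # prints <clone-uuid>
    rpc.py bdev_lvol_inflate  <clone-uuid>                    # allocate all clusters, detaching the clone from its snapshot

The "queue size 128, less than required" notice above is the perf tool warning that requests beyond the controller queue depth will queue in the NVMe driver; the per-core latency summary follows.
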
00:13:47.218 ======================================================== 00:13:47.218 Latency(us) 00:13:47.218 Device Information : IOPS MiB/s Average min max 00:13:47.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12113.50 47.32 10574.87 1225.44 112451.72 00:13:47.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12031.10 47.00 10646.89 3486.10 49485.03 00:13:47.218 ======================================================== 00:13:47.218 Total : 24144.60 94.31 10610.76 1225.44 112451.72 00:13:47.218 00:13:47.218 07:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:47.477 07:49:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39af4e3d-d4da-43a6-bd6e-69317df56093 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ad7569c-7728-4a72-9f4a-a9d36df71ee6 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.737 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.996 rmmod nvme_tcp 00:13:47.996 rmmod nvme_fabrics 00:13:47.996 rmmod nvme_keyring 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:47.996 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3197406 ']' 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3197406 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3197406 ']' 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3197406 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3197406 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3197406' 00:13:47.997 killing process with pid 3197406 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3197406 00:13:47.997 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3197406 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.255 
07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.255 07:49:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.160 07:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.160 00:13:50.160 real 0m22.174s 00:13:50.160 user 1m4.796s 00:13:50.160 sys 0m7.131s 00:13:50.160 07:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.160 07:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:50.160 ************************************ 00:13:50.160 END TEST nvmf_lvol 00:13:50.160 ************************************ 00:13:50.419 07:49:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.419 07:49:34 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:50.419 07:49:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.419 07:49:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.419 07:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.419 ************************************ 00:13:50.419 START TEST nvmf_lvs_grow 00:13:50.419 ************************************ 00:13:50.419 07:49:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:50.419 * Looking for test storage... 
00:13:50.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.419 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.420 07:49:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.988 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.989 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.989 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:13:56.989 00:13:56.989 --- 10.0.0.2 ping statistics --- 00:13:56.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.989 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:13:56.989 00:13:56.989 --- 10.0.0.1 ping statistics --- 00:13:56.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.989 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3203270 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3203270 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3203270 ']' 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.989 07:49:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:56.989 [2024-07-15 07:49:40.924761] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:13:56.989 [2024-07-15 07:49:40.924800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.989 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.989 [2024-07-15 07:49:40.995455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.989 [2024-07-15 07:49:41.074294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.989 [2024-07-15 07:49:41.074332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:56.989 [2024-07-15 07:49:41.074340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.989 [2024-07-15 07:49:41.074346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.989 [2024-07-15 07:49:41.074351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.989 [2024-07-15 07:49:41.074376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:57.248 [2024-07-15 07:49:41.933599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.248 ************************************ 00:13:57.248 START TEST lvs_grow_clean 00:13:57.248 ************************************ 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.248 07:49:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.507 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:57.507 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:57.766 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3eebbe60-37f6-4cc2-8054-365823214ea9 00:13:57.766 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:13:57.766 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3eebbe60-37f6-4cc2-8054-365823214ea9 lvol 150 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9c0415a-7517-4150-960e-727d9f3dc4c5 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:58.025 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:58.283 [2024-07-15 07:49:42.887013] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:58.283 [2024-07-15 07:49:42.887067] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:58.283 true 00:13:58.283 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:13:58.284 07:49:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:58.543 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:58.543 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:58.543 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9c0415a-7517-4150-960e-727d9f3dc4c5 00:13:58.806 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:58.806 [2024-07-15 07:49:43.544990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.806 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3203775 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3203775 /var/tmp/bdevperf.sock 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3203775 ']' 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.064 07:49:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:59.064 [2024-07-15 07:49:43.760967] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
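
bdevperf was launched with -z, so it comes up idle and waits to be configured over its private RPC socket instead of reading a config file. The pattern, sketched from the binaries and arguments this run logs (backgrounding with & is added here for illustration; socket path and NQN are verbatim):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Its startup banner continues below.
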
00:13:59.064 [2024-07-15 07:49:43.761013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203775 ] 00:13:59.064 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.323 [2024-07-15 07:49:43.825454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.323 [2024-07-15 07:49:43.897688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.890 07:49:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.891 07:49:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:59.891 07:49:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:00.150 Nvme0n1 00:14:00.150 07:49:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:00.409 [ 00:14:00.409 { 00:14:00.409 "name": "Nvme0n1", 00:14:00.409 "aliases": [ 00:14:00.409 "f9c0415a-7517-4150-960e-727d9f3dc4c5" 00:14:00.409 ], 00:14:00.409 "product_name": "NVMe disk", 00:14:00.409 "block_size": 4096, 00:14:00.409 "num_blocks": 38912, 00:14:00.409 "uuid": "f9c0415a-7517-4150-960e-727d9f3dc4c5", 00:14:00.409 "assigned_rate_limits": { 00:14:00.409 "rw_ios_per_sec": 0, 00:14:00.409 "rw_mbytes_per_sec": 0, 00:14:00.409 "r_mbytes_per_sec": 0, 00:14:00.409 "w_mbytes_per_sec": 0 00:14:00.409 }, 00:14:00.409 "claimed": false, 00:14:00.409 "zoned": false, 00:14:00.409 "supported_io_types": { 00:14:00.409 "read": true, 00:14:00.409 "write": true, 00:14:00.409 "unmap": true, 00:14:00.409 "flush": true, 00:14:00.409 "reset": true, 00:14:00.409 "nvme_admin": true, 00:14:00.409 "nvme_io": true, 00:14:00.409 "nvme_io_md": false, 00:14:00.409 "write_zeroes": true, 00:14:00.409 "zcopy": false, 00:14:00.409 "get_zone_info": false, 00:14:00.409 "zone_management": false, 00:14:00.409 "zone_append": false, 00:14:00.409 "compare": true, 00:14:00.409 "compare_and_write": true, 00:14:00.409 "abort": true, 00:14:00.409 "seek_hole": false, 00:14:00.409 "seek_data": false, 00:14:00.409 "copy": true, 00:14:00.409 "nvme_iov_md": false 00:14:00.409 }, 00:14:00.409 "memory_domains": [ 00:14:00.409 { 00:14:00.409 "dma_device_id": "system", 00:14:00.409 "dma_device_type": 1 00:14:00.409 } 00:14:00.409 ], 00:14:00.409 "driver_specific": { 00:14:00.409 "nvme": [ 00:14:00.409 { 00:14:00.409 "trid": { 00:14:00.409 "trtype": "TCP", 00:14:00.409 "adrfam": "IPv4", 00:14:00.409 "traddr": "10.0.0.2", 00:14:00.409 "trsvcid": "4420", 00:14:00.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:00.409 }, 00:14:00.409 "ctrlr_data": { 00:14:00.409 "cntlid": 1, 00:14:00.409 "vendor_id": "0x8086", 00:14:00.409 "model_number": "SPDK bdev Controller", 00:14:00.409 "serial_number": "SPDK0", 00:14:00.409 "firmware_revision": "24.09", 00:14:00.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:00.409 "oacs": { 00:14:00.409 "security": 0, 00:14:00.409 "format": 0, 00:14:00.409 "firmware": 0, 00:14:00.409 "ns_manage": 0 00:14:00.409 }, 00:14:00.409 "multi_ctrlr": true, 00:14:00.409 "ana_reporting": false 00:14:00.409 }, 
00:14:00.409 "vs": { 00:14:00.409 "nvme_version": "1.3" 00:14:00.409 }, 00:14:00.409 "ns_data": { 00:14:00.409 "id": 1, 00:14:00.409 "can_share": true 00:14:00.409 } 00:14:00.409 } 00:14:00.409 ], 00:14:00.409 "mp_policy": "active_passive" 00:14:00.409 } 00:14:00.409 } 00:14:00.409 ] 00:14:00.409 07:49:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3204009 00:14:00.409 07:49:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:00.409 07:49:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:00.409 Running I/O for 10 seconds... 00:14:01.347 Latency(us) 00:14:01.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.347 Nvme0n1 : 1.00 23433.00 91.54 0.00 0.00 0.00 0.00 0.00 00:14:01.347 =================================================================================================================== 00:14:01.347 Total : 23433.00 91.54 0.00 0.00 0.00 0.00 0.00 00:14:01.347 00:14:02.284 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:02.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.543 Nvme0n1 : 2.00 23528.00 91.91 0.00 0.00 0.00 0.00 0.00 00:14:02.543 =================================================================================================================== 00:14:02.543 Total : 23528.00 91.91 0.00 0.00 0.00 0.00 0.00 00:14:02.543 00:14:02.543 true 00:14:02.543 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:02.543 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:02.802 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:02.802 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:02.802 07:49:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3204009 00:14:03.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.371 Nvme0n1 : 3.00 23533.00 91.93 0.00 0.00 0.00 0.00 0.00 00:14:03.371 =================================================================================================================== 00:14:03.371 Total : 23533.00 91.93 0.00 0.00 0.00 0.00 0.00 00:14:03.371 00:14:04.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.751 Nvme0n1 : 4.00 23555.25 92.01 0.00 0.00 0.00 0.00 0.00 00:14:04.751 =================================================================================================================== 00:14:04.751 Total : 23555.25 92.01 0.00 0.00 0.00 0.00 0.00 00:14:04.751 00:14:05.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.689 Nvme0n1 : 5.00 23572.00 92.08 0.00 0.00 0.00 0.00 0.00 00:14:05.689 =================================================================================================================== 00:14:05.689 
Total : 23572.00 92.08 0.00 0.00 0.00 0.00 0.00 00:14:05.689 00:14:06.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.664 Nvme0n1 : 6.00 23585.50 92.13 0.00 0.00 0.00 0.00 0.00 00:14:06.664 =================================================================================================================== 00:14:06.664 Total : 23585.50 92.13 0.00 0.00 0.00 0.00 0.00 00:14:06.664 00:14:07.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.600 Nvme0n1 : 7.00 23597.86 92.18 0.00 0.00 0.00 0.00 0.00 00:14:07.600 =================================================================================================================== 00:14:07.600 Total : 23597.86 92.18 0.00 0.00 0.00 0.00 0.00 00:14:07.600 00:14:08.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.536 Nvme0n1 : 8.00 23612.88 92.24 0.00 0.00 0.00 0.00 0.00 00:14:08.536 =================================================================================================================== 00:14:08.536 Total : 23612.88 92.24 0.00 0.00 0.00 0.00 0.00 00:14:08.536 00:14:09.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.468 Nvme0n1 : 9.00 23626.89 92.29 0.00 0.00 0.00 0.00 0.00 00:14:09.468 =================================================================================================================== 00:14:09.468 Total : 23626.89 92.29 0.00 0.00 0.00 0.00 0.00 00:14:09.468 00:14:10.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.443 Nvme0n1 : 10.00 23642.60 92.35 0.00 0.00 0.00 0.00 0.00 00:14:10.443 =================================================================================================================== 00:14:10.443 Total : 23642.60 92.35 0.00 0.00 0.00 0.00 0.00 00:14:10.443 00:14:10.443 00:14:10.443 Latency(us) 00:14:10.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.443 Nvme0n1 : 10.00 23645.32 92.36 0.00 0.00 5409.90 1488.81 11169.61 00:14:10.443 =================================================================================================================== 00:14:10.443 Total : 23645.32 92.36 0.00 0.00 5409.90 1488.81 11169.61 00:14:10.443 0 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3203775 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3203775 ']' 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3203775 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3203775 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.443 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3203775' 00:14:10.443 killing process with pid 3203775 00:14:10.443 07:49:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3203775 00:14:10.443 Received shutdown signal, test time was about 10.000000 seconds 00:14:10.444 00:14:10.444 Latency(us) 00:14:10.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.444 =================================================================================================================== 00:14:10.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.444 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3203775 00:14:10.702 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.960 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:11.218 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:11.218 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:11.218 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:11.218 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:11.218 07:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:11.476 [2024-07-15 07:49:56.110847] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:11.476 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:11.735 request: 00:14:11.735 { 00:14:11.735 "uuid": "3eebbe60-37f6-4cc2-8054-365823214ea9", 00:14:11.735 "method": "bdev_lvol_get_lvstores", 00:14:11.735 "req_id": 1 00:14:11.735 } 00:14:11.735 Got JSON-RPC error response 00:14:11.735 response: 00:14:11.735 { 00:14:11.735 "code": -19, 00:14:11.735 "message": "No such device" 00:14:11.735 } 00:14:11.735 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:11.735 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.735 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.735 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.735 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:11.994 aio_bdev 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f9c0415a-7517-4150-960e-727d9f3dc4c5 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=f9c0415a-7517-4150-960e-727d9f3dc4c5 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:11.994 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f9c0415a-7517-4150-960e-727d9f3dc4c5 -t 2000 00:14:12.254 [ 00:14:12.254 { 00:14:12.254 "name": "f9c0415a-7517-4150-960e-727d9f3dc4c5", 00:14:12.254 "aliases": [ 00:14:12.254 "lvs/lvol" 00:14:12.254 ], 00:14:12.254 "product_name": "Logical Volume", 00:14:12.254 "block_size": 4096, 00:14:12.254 "num_blocks": 38912, 00:14:12.254 "uuid": "f9c0415a-7517-4150-960e-727d9f3dc4c5", 00:14:12.254 "assigned_rate_limits": { 00:14:12.254 "rw_ios_per_sec": 0, 00:14:12.254 "rw_mbytes_per_sec": 0, 00:14:12.254 "r_mbytes_per_sec": 0, 00:14:12.254 "w_mbytes_per_sec": 0 00:14:12.254 }, 00:14:12.254 "claimed": false, 00:14:12.254 "zoned": false, 00:14:12.254 "supported_io_types": { 00:14:12.254 "read": true, 00:14:12.254 "write": true, 00:14:12.254 "unmap": true, 00:14:12.254 "flush": false, 00:14:12.254 "reset": true, 00:14:12.254 "nvme_admin": false, 00:14:12.254 "nvme_io": false, 00:14:12.254 
"nvme_io_md": false, 00:14:12.254 "write_zeroes": true, 00:14:12.254 "zcopy": false, 00:14:12.254 "get_zone_info": false, 00:14:12.254 "zone_management": false, 00:14:12.254 "zone_append": false, 00:14:12.254 "compare": false, 00:14:12.254 "compare_and_write": false, 00:14:12.254 "abort": false, 00:14:12.254 "seek_hole": true, 00:14:12.254 "seek_data": true, 00:14:12.254 "copy": false, 00:14:12.254 "nvme_iov_md": false 00:14:12.254 }, 00:14:12.254 "driver_specific": { 00:14:12.254 "lvol": { 00:14:12.254 "lvol_store_uuid": "3eebbe60-37f6-4cc2-8054-365823214ea9", 00:14:12.254 "base_bdev": "aio_bdev", 00:14:12.254 "thin_provision": false, 00:14:12.254 "num_allocated_clusters": 38, 00:14:12.254 "snapshot": false, 00:14:12.254 "clone": false, 00:14:12.254 "esnap_clone": false 00:14:12.254 } 00:14:12.254 } 00:14:12.254 } 00:14:12.254 ] 00:14:12.254 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:12.254 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:12.254 07:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:12.513 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:12.513 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:12.513 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:12.513 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:12.513 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9c0415a-7517-4150-960e-727d9f3dc4c5 00:14:12.772 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3eebbe60-37f6-4cc2-8054-365823214ea9 00:14:13.031 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:13.031 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.031 00:14:13.031 real 0m15.776s 00:14:13.031 user 0m15.386s 00:14:13.031 sys 0m1.498s 00:14:13.031 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.031 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:13.031 ************************************ 00:14:13.031 END TEST lvs_grow_clean 00:14:13.031 ************************************ 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:13.290 ************************************ 00:14:13.290 START TEST lvs_grow_dirty 00:14:13.290 ************************************ 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:13.290 07:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:13.290 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:13.549 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:13.549 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:13.549 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:13.549 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:13.809 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:13.809 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:13.809 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86a65090-2e14-494d-96f4-aa8af56d64e0 lvol 150 00:14:14.068 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:14.068 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:14.068 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:14.068 
[2024-07-15 07:49:58.733174] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:14.068 [2024-07-15 07:49:58.733230] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:14.068 true 00:14:14.068 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:14.068 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:14.327 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:14.327 07:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.587 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:14.587 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.846 [2024-07-15 07:49:59.403175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3206377 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3206377 /var/tmp/bdevperf.sock 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3206377 ']' 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
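The dirty variant being set up here drives the same growth through the AIO layer instead: the backing file is enlarged with truncate and the AIO bdev rescanned, which the notice above records as a resize from 51200 to 102400 blocks. A sketch of that sequence, with $AIO_FILE and $LVS_UUID as placeholders for the full aio_bdev path and the 86a65090-2e14-494d-96f4-aa8af56d64e0 UUID shown in the trace:

truncate -s 200M $AIO_FILE
$SPDK/scripts/rpc.py bdev_aio_create $AIO_FILE aio_bdev 4096
$SPDK/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs       # 49 data clusters at 200M
$SPDK/scripts/rpc.py bdev_lvol_create -u $LVS_UUID lvol 150  # 150 MiB logical volume
# grow the backing file, then let the AIO bdev pick up the new size
truncate -s 400M $AIO_FILE
$SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev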
00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.846 07:49:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:15.105 [2024-07-15 07:49:59.616919] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:14:15.105 [2024-07-15 07:49:59.616970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206377 ] 00:14:15.105 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.105 [2024-07-15 07:49:59.681258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.105 [2024-07-15 07:49:59.753085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.040 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.040 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:16.040 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:16.040 Nvme0n1 00:14:16.040 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.299 [ 00:14:16.299 { 00:14:16.299 "name": "Nvme0n1", 00:14:16.299 "aliases": [ 00:14:16.299 "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf" 00:14:16.299 ], 00:14:16.299 "product_name": "NVMe disk", 00:14:16.299 "block_size": 4096, 00:14:16.299 "num_blocks": 38912, 00:14:16.299 "uuid": "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf", 00:14:16.299 "assigned_rate_limits": { 00:14:16.299 "rw_ios_per_sec": 0, 00:14:16.299 "rw_mbytes_per_sec": 0, 00:14:16.299 "r_mbytes_per_sec": 0, 00:14:16.299 "w_mbytes_per_sec": 0 00:14:16.299 }, 00:14:16.299 "claimed": false, 00:14:16.299 "zoned": false, 00:14:16.299 "supported_io_types": { 00:14:16.299 "read": true, 00:14:16.299 "write": true, 00:14:16.299 "unmap": true, 00:14:16.299 "flush": true, 00:14:16.299 "reset": true, 00:14:16.299 "nvme_admin": true, 00:14:16.299 "nvme_io": true, 00:14:16.299 "nvme_io_md": false, 00:14:16.299 "write_zeroes": true, 00:14:16.299 "zcopy": false, 00:14:16.299 "get_zone_info": false, 00:14:16.299 "zone_management": false, 00:14:16.299 "zone_append": false, 00:14:16.299 "compare": true, 00:14:16.299 "compare_and_write": true, 00:14:16.299 "abort": true, 00:14:16.299 "seek_hole": false, 00:14:16.299 "seek_data": false, 00:14:16.299 "copy": true, 00:14:16.299 "nvme_iov_md": false 00:14:16.299 }, 00:14:16.299 "memory_domains": [ 00:14:16.299 { 00:14:16.299 "dma_device_id": "system", 00:14:16.299 "dma_device_type": 1 00:14:16.299 } 00:14:16.299 ], 00:14:16.299 "driver_specific": { 00:14:16.299 "nvme": [ 00:14:16.299 { 00:14:16.299 "trid": { 00:14:16.299 "trtype": "TCP", 00:14:16.299 "adrfam": "IPv4", 00:14:16.299 "traddr": "10.0.0.2", 00:14:16.299 "trsvcid": "4420", 00:14:16.299 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:16.299 }, 00:14:16.299 "ctrlr_data": { 00:14:16.299 "cntlid": 1, 00:14:16.299 "vendor_id": "0x8086", 00:14:16.299 "model_number": "SPDK bdev Controller", 00:14:16.299 "serial_number": "SPDK0", 
00:14:16.299 "firmware_revision": "24.09", 00:14:16.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.299 "oacs": { 00:14:16.299 "security": 0, 00:14:16.299 "format": 0, 00:14:16.299 "firmware": 0, 00:14:16.299 "ns_manage": 0 00:14:16.299 }, 00:14:16.299 "multi_ctrlr": true, 00:14:16.299 "ana_reporting": false 00:14:16.299 }, 00:14:16.299 "vs": { 00:14:16.299 "nvme_version": "1.3" 00:14:16.299 }, 00:14:16.299 "ns_data": { 00:14:16.299 "id": 1, 00:14:16.299 "can_share": true 00:14:16.299 } 00:14:16.299 } 00:14:16.299 ], 00:14:16.299 "mp_policy": "active_passive" 00:14:16.299 } 00:14:16.299 } 00:14:16.299 ] 00:14:16.299 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3206611 00:14:16.299 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.299 07:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.299 Running I/O for 10 seconds... 00:14:17.236 Latency(us) 00:14:17.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.236 Nvme0n1 : 1.00 23174.00 90.52 0.00 0.00 0.00 0.00 0.00 00:14:17.236 =================================================================================================================== 00:14:17.236 Total : 23174.00 90.52 0.00 0.00 0.00 0.00 0.00 00:14:17.236 00:14:18.173 07:50:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:18.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.433 Nvme0n1 : 2.00 23399.50 91.40 0.00 0.00 0.00 0.00 0.00 00:14:18.433 =================================================================================================================== 00:14:18.433 Total : 23399.50 91.40 0.00 0.00 0.00 0.00 0.00 00:14:18.433 00:14:18.433 true 00:14:18.433 07:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:18.433 07:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:18.692 07:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:18.692 07:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:18.692 07:50:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3206611 00:14:19.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.259 Nvme0n1 : 3.00 23443.67 91.58 0.00 0.00 0.00 0.00 0.00 00:14:19.259 =================================================================================================================== 00:14:19.259 Total : 23443.67 91.58 0.00 0.00 0.00 0.00 0.00 00:14:19.259 00:14:20.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.635 Nvme0n1 : 4.00 23520.00 91.88 0.00 0.00 0.00 0.00 0.00 00:14:20.635 =================================================================================================================== 00:14:20.635 Total : 23520.00 91.88 0.00 
0.00 0.00 0.00 0.00 00:14:20.635 00:14:21.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.571 Nvme0n1 : 5.00 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:14:21.571 =================================================================================================================== 00:14:21.571 Total : 23571.00 92.07 0.00 0.00 0.00 0.00 0.00 00:14:21.571 00:14:22.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.510 Nvme0n1 : 6.00 23603.17 92.20 0.00 0.00 0.00 0.00 0.00 00:14:22.510 =================================================================================================================== 00:14:22.510 Total : 23603.17 92.20 0.00 0.00 0.00 0.00 0.00 00:14:22.510 00:14:23.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.446 Nvme0n1 : 7.00 23633.29 92.32 0.00 0.00 0.00 0.00 0.00 00:14:23.446 =================================================================================================================== 00:14:23.446 Total : 23633.29 92.32 0.00 0.00 0.00 0.00 0.00 00:14:23.446 00:14:24.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.385 Nvme0n1 : 8.00 23648.38 92.38 0.00 0.00 0.00 0.00 0.00 00:14:24.385 =================================================================================================================== 00:14:24.385 Total : 23648.38 92.38 0.00 0.00 0.00 0.00 0.00 00:14:24.385 00:14:25.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.359 Nvme0n1 : 9.00 23649.56 92.38 0.00 0.00 0.00 0.00 0.00 00:14:25.359 =================================================================================================================== 00:14:25.359 Total : 23649.56 92.38 0.00 0.00 0.00 0.00 0.00 00:14:25.359 00:14:26.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.296 Nvme0n1 : 10.00 23648.70 92.38 0.00 0.00 0.00 0.00 0.00 00:14:26.296 =================================================================================================================== 00:14:26.296 Total : 23648.70 92.38 0.00 0.00 0.00 0.00 0.00 00:14:26.296 00:14:26.296 00:14:26.296 Latency(us) 00:14:26.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.296 Nvme0n1 : 10.00 23655.86 92.41 0.00 0.00 5408.06 3248.31 14588.88 00:14:26.296 =================================================================================================================== 00:14:26.296 Total : 23655.86 92.41 0.00 0.00 5408.06 3248.31 14588.88 00:14:26.296 0 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3206377 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3206377 ']' 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3206377 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.296 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3206377 00:14:26.556 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.556 07:50:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.556 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3206377' 00:14:26.556 killing process with pid 3206377 00:14:26.556 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3206377 00:14:26.556 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.556 00:14:26.556 Latency(us) 00:14:26.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.556 =================================================================================================================== 00:14:26.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.556 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3206377 00:14:26.556 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.816 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3203270 00:14:27.075 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3203270 00:14:27.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3203270 Killed "${NVMF_APP[@]}" "$@" 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3208454 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3208454 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3208454 ']' 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.335 07:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 [2024-07-15 07:50:11.884993] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:14:27.335 [2024-07-15 07:50:11.885039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.335 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.335 [2024-07-15 07:50:11.958284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.335 [2024-07-15 07:50:12.028716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.335 [2024-07-15 07:50:12.028757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.335 [2024-07-15 07:50:12.028763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.335 [2024-07-15 07:50:12.028769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.335 [2024-07-15 07:50:12.028774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
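This restart brings the target up with all tracepoint groups enabled (-e 0xFFFF) inside the cvl_0_0_ns_spdk network namespace, which is why the teardown later tars up /dev/shm/nvmf_trace.0. The notices above already name the live-capture command; a sketch of both capture paths, $SPDK as before:

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
spdk_trace -s nvmf -i 0            # live snapshot of tracepoints while the app runs
cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shm file for offline analysis/debug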
00:14:27.335 [2024-07-15 07:50:12.028791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.273 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.273 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.274 [2024-07-15 07:50:12.892871] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:28.274 [2024-07-15 07:50:12.892958] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:28.274 [2024-07-15 07:50:12.892983] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.274 07:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.533 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf -t 2000 00:14:28.533 [ 00:14:28.533 { 00:14:28.533 "name": "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf", 00:14:28.533 "aliases": [ 00:14:28.533 "lvs/lvol" 00:14:28.533 ], 00:14:28.533 "product_name": "Logical Volume", 00:14:28.533 "block_size": 4096, 00:14:28.533 "num_blocks": 38912, 00:14:28.533 "uuid": "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf", 00:14:28.533 "assigned_rate_limits": { 00:14:28.533 "rw_ios_per_sec": 0, 00:14:28.533 "rw_mbytes_per_sec": 0, 00:14:28.533 "r_mbytes_per_sec": 0, 00:14:28.533 "w_mbytes_per_sec": 0 00:14:28.533 }, 00:14:28.533 "claimed": false, 00:14:28.533 "zoned": false, 00:14:28.533 "supported_io_types": { 00:14:28.533 "read": true, 00:14:28.533 "write": true, 00:14:28.533 "unmap": true, 00:14:28.533 "flush": false, 00:14:28.533 "reset": true, 00:14:28.533 "nvme_admin": false, 00:14:28.533 "nvme_io": false, 00:14:28.533 "nvme_io_md": 
false, 00:14:28.533 "write_zeroes": true, 00:14:28.533 "zcopy": false, 00:14:28.533 "get_zone_info": false, 00:14:28.533 "zone_management": false, 00:14:28.533 "zone_append": false, 00:14:28.533 "compare": false, 00:14:28.533 "compare_and_write": false, 00:14:28.533 "abort": false, 00:14:28.533 "seek_hole": true, 00:14:28.533 "seek_data": true, 00:14:28.533 "copy": false, 00:14:28.533 "nvme_iov_md": false 00:14:28.533 }, 00:14:28.533 "driver_specific": { 00:14:28.533 "lvol": { 00:14:28.533 "lvol_store_uuid": "86a65090-2e14-494d-96f4-aa8af56d64e0", 00:14:28.533 "base_bdev": "aio_bdev", 00:14:28.533 "thin_provision": false, 00:14:28.533 "num_allocated_clusters": 38, 00:14:28.533 "snapshot": false, 00:14:28.533 "clone": false, 00:14:28.533 "esnap_clone": false 00:14:28.533 } 00:14:28.533 } 00:14:28.533 } 00:14:28.533 ] 00:14:28.533 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:28.533 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:28.533 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:28.791 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:28.791 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:28.791 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:29.050 [2024-07-15 07:50:13.757597] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.050 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:29.309 request: 00:14:29.309 { 00:14:29.309 "uuid": "86a65090-2e14-494d-96f4-aa8af56d64e0", 00:14:29.309 "method": "bdev_lvol_get_lvstores", 00:14:29.309 "req_id": 1 00:14:29.309 } 00:14:29.309 Got JSON-RPC error response 00:14:29.309 response: 00:14:29.309 { 00:14:29.309 "code": -19, 00:14:29.309 "message": "No such device" 00:14:29.309 } 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.309 07:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.567 aio_bdev 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.567 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.568 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:29.568 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf -t 2000 00:14:29.827 [ 00:14:29.827 { 00:14:29.827 "name": "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf", 00:14:29.827 "aliases": [ 00:14:29.827 "lvs/lvol" 00:14:29.827 ], 00:14:29.827 "product_name": "Logical Volume", 00:14:29.827 "block_size": 4096, 00:14:29.827 "num_blocks": 38912, 00:14:29.827 "uuid": "6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf", 00:14:29.827 "assigned_rate_limits": { 00:14:29.827 "rw_ios_per_sec": 0, 00:14:29.827 "rw_mbytes_per_sec": 0, 00:14:29.827 "r_mbytes_per_sec": 0, 00:14:29.827 "w_mbytes_per_sec": 0 00:14:29.827 }, 00:14:29.827 "claimed": false, 00:14:29.827 "zoned": false, 00:14:29.827 "supported_io_types": { 
00:14:29.827 "read": true, 00:14:29.827 "write": true, 00:14:29.827 "unmap": true, 00:14:29.827 "flush": false, 00:14:29.827 "reset": true, 00:14:29.827 "nvme_admin": false, 00:14:29.827 "nvme_io": false, 00:14:29.827 "nvme_io_md": false, 00:14:29.827 "write_zeroes": true, 00:14:29.827 "zcopy": false, 00:14:29.827 "get_zone_info": false, 00:14:29.827 "zone_management": false, 00:14:29.827 "zone_append": false, 00:14:29.827 "compare": false, 00:14:29.827 "compare_and_write": false, 00:14:29.827 "abort": false, 00:14:29.827 "seek_hole": true, 00:14:29.827 "seek_data": true, 00:14:29.827 "copy": false, 00:14:29.827 "nvme_iov_md": false 00:14:29.827 }, 00:14:29.827 "driver_specific": { 00:14:29.827 "lvol": { 00:14:29.827 "lvol_store_uuid": "86a65090-2e14-494d-96f4-aa8af56d64e0", 00:14:29.827 "base_bdev": "aio_bdev", 00:14:29.827 "thin_provision": false, 00:14:29.827 "num_allocated_clusters": 38, 00:14:29.827 "snapshot": false, 00:14:29.827 "clone": false, 00:14:29.827 "esnap_clone": false 00:14:29.827 } 00:14:29.827 } 00:14:29.827 } 00:14:29.827 ] 00:14:29.827 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:29.827 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:29.827 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:30.086 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:30.086 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:30.086 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:30.086 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:30.086 07:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e992ec1-b25c-4e94-a8f2-4a9d788cf3cf 00:14:30.344 07:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86a65090-2e14-494d-96f4-aa8af56d64e0 00:14:30.603 07:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.863 00:14:30.863 real 0m17.573s 00:14:30.863 user 0m45.180s 00:14:30.863 sys 0m3.672s 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:30.863 ************************************ 00:14:30.863 END TEST lvs_grow_dirty 00:14:30.863 ************************************ 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.863 nvmf_trace.0 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:30.863 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.864 rmmod nvme_tcp 00:14:30.864 rmmod nvme_fabrics 00:14:30.864 rmmod nvme_keyring 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3208454 ']' 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3208454 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3208454 ']' 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3208454 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3208454 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3208454' 00:14:30.864 killing process with pid 3208454 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3208454 00:14:30.864 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3208454 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.123 
07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.123 07:50:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.664 07:50:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.664 00:14:33.664 real 0m42.877s 00:14:33.664 user 1m6.477s 00:14:33.664 sys 0m9.981s 00:14:33.664 07:50:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.664 07:50:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:33.664 ************************************ 00:14:33.664 END TEST nvmf_lvs_grow 00:14:33.664 ************************************ 00:14:33.664 07:50:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:33.664 07:50:17 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.664 07:50:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.664 07:50:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.664 07:50:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.664 ************************************ 00:14:33.664 START TEST nvmf_bdev_io_wait 00:14:33.664 ************************************ 00:14:33.664 07:50:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.664 * Looking for test storage... 
00:14:33.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.664 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.665 07:50:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.941 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.941 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.941 Found net devices under 0000:86:00.0: cvl_0_0 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.941 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.941 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:39.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:39.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:14:39.200 00:14:39.200 --- 10.0.0.2 ping statistics --- 00:14:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.200 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:14:39.200 00:14:39.200 --- 10.0.0.1 ping statistics --- 00:14:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.200 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3212578 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3212578 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3212578 ']' 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.200 07:50:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.200 [2024-07-15 07:50:23.827196] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:14:39.200 [2024-07-15 07:50:23.827266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.200 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.200 [2024-07-15 07:50:23.896788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.459 [2024-07-15 07:50:23.981292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.459 [2024-07-15 07:50:23.981329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.459 [2024-07-15 07:50:23.981336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.459 [2024-07-15 07:50:23.981343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.459 [2024-07-15 07:50:23.981348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.459 [2024-07-15 07:50:23.981405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.459 [2024-07-15 07:50:23.981512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.459 [2024-07-15 07:50:23.981594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.459 [2024-07-15 07:50:23.981595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.027 [2024-07-15 07:50:24.735636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
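The trace above shows the harness starting nvmf_tgt paused (--wait-for-rpc) inside the cvl_0_0_ns_spdk namespace, shrinking the bdev I/O pool, and creating the TCP transport; the subsystem, namespace, and listener RPCs follow just below. A minimal standalone sketch of that same sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (paths, the cnode1/Malloc0 names, and 10.0.0.2 mirror this run, not a general recipe):

# start the target in the test namespace, paused until framework init via RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

# tiny bdev I/O pool (5 entries, cache 1) so allocations fail under qd=128
# and the bdev_io_wait retry path actually gets exercised
scripts/rpc.py bdev_set_options -p 5 -c 1
scripts/rpc.py framework_start_init

# TCP transport with an 8 KiB I/O unit, then a malloc-backed subsystem
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420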
00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.027 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.287 Malloc0 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.287 [2024-07-15 07:50:24.806721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3212756 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3212758 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:40.287 { 00:14:40.287 "params": { 00:14:40.287 "name": "Nvme$subsystem", 00:14:40.287 "trtype": "$TEST_TRANSPORT", 00:14:40.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.287 "adrfam": "ipv4", 00:14:40.287 "trsvcid": "$NVMF_PORT", 00:14:40.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.287 "hdgst": ${hdgst:-false}, 00:14:40.287 "ddgst": ${ddgst:-false} 00:14:40.287 }, 00:14:40.287 "method": "bdev_nvme_attach_controller" 00:14:40.287 } 00:14:40.287 EOF 00:14:40.287 )") 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3212760 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:40.287 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:40.287 { 00:14:40.287 "params": { 00:14:40.287 "name": "Nvme$subsystem", 00:14:40.287 "trtype": "$TEST_TRANSPORT", 00:14:40.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.287 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "$NVMF_PORT", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.288 "hdgst": ${hdgst:-false}, 00:14:40.288 "ddgst": ${ddgst:-false} 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 } 00:14:40.288 EOF 00:14:40.288 )") 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3212763 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:40.288 { 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme$subsystem", 00:14:40.288 "trtype": "$TEST_TRANSPORT", 00:14:40.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "$NVMF_PORT", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.288 "hdgst": ${hdgst:-false}, 00:14:40.288 "ddgst": ${ddgst:-false} 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 } 00:14:40.288 EOF 00:14:40.288 )") 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:40.288 07:50:24 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:40.288 { 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme$subsystem", 00:14:40.288 "trtype": "$TEST_TRANSPORT", 00:14:40.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "$NVMF_PORT", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:40.288 "hdgst": ${hdgst:-false}, 00:14:40.288 "ddgst": ${ddgst:-false} 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 } 00:14:40.288 EOF 00:14:40.288 )") 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3212756 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme1", 00:14:40.288 "trtype": "tcp", 00:14:40.288 "traddr": "10.0.0.2", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "4420", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.288 "hdgst": false, 00:14:40.288 "ddgst": false 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 }' 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme1", 00:14:40.288 "trtype": "tcp", 00:14:40.288 "traddr": "10.0.0.2", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "4420", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.288 "hdgst": false, 00:14:40.288 "ddgst": false 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 }' 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme1", 00:14:40.288 "trtype": "tcp", 00:14:40.288 "traddr": "10.0.0.2", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "4420", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.288 "hdgst": false, 00:14:40.288 "ddgst": false 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 }' 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:40.288 07:50:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:40.288 "params": { 00:14:40.288 "name": "Nvme1", 00:14:40.288 "trtype": "tcp", 00:14:40.288 "traddr": "10.0.0.2", 00:14:40.288 "adrfam": "ipv4", 00:14:40.288 "trsvcid": "4420", 00:14:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.288 "hdgst": false, 00:14:40.288 "ddgst": false 00:14:40.288 }, 00:14:40.288 "method": "bdev_nvme_attach_controller" 00:14:40.288 }' 00:14:40.288 [2024-07-15 07:50:24.855936] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:14:40.288 [2024-07-15 07:50:24.855987] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:40.288 [2024-07-15 07:50:24.859270] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:14:40.288 [2024-07-15 07:50:24.859315] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:40.288 [2024-07-15 07:50:24.859720] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:14:40.288 [2024-07-15 07:50:24.859762] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:40.288 [2024-07-15 07:50:24.859781] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:14:40.288 [2024-07-15 07:50:24.859817] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:40.288 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.288 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.288 [2024-07-15 07:50:25.028295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.548 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.548 [2024-07-15 07:50:25.106158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:40.548 [2024-07-15 07:50:25.119462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.548 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.548 [2024-07-15 07:50:25.195262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:40.548 [2024-07-15 07:50:25.220746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.548 [2024-07-15 07:50:25.281698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.806 [2024-07-15 07:50:25.305364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:40.806 [2024-07-15 07:50:25.357663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:40.806 Running I/O for 1 seconds... 00:14:40.806 Running I/O for 1 seconds... 00:14:41.065 Running I/O for 1 seconds... 00:14:41.065 Running I/O for 1 seconds... 00:14:42.001 00:14:42.001 Latency(us) 00:14:42.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.001 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:42.001 Nvme1n1 : 1.01 14385.65 56.19 0.00 0.00 8871.34 5071.92 16640.45 00:14:42.001 =================================================================================================================== 00:14:42.001 Total : 14385.65 56.19 0.00 0.00 8871.34 5071.92 16640.45 00:14:42.001 00:14:42.001 Latency(us) 00:14:42.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.001 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:42.001 Nvme1n1 : 1.01 6701.17 26.18 0.00 0.00 18923.30 9118.05 31457.28 00:14:42.001 =================================================================================================================== 00:14:42.001 Total : 6701.17 26.18 0.00 0.00 18923.30 9118.05 31457.28 00:14:42.001 00:14:42.001 Latency(us) 00:14:42.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.001 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:42.001 Nvme1n1 : 1.00 244816.59 956.31 0.00 0.00 520.84 207.47 641.11 00:14:42.001 =================================================================================================================== 00:14:42.001 Total : 244816.59 956.31 0.00 0.00 520.84 207.47 641.11 00:14:42.001 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3212758 00:14:42.001 00:14:42.001 Latency(us) 00:14:42.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.001 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:42.001 Nvme1n1 : 1.00 7612.27 29.74 0.00 0.00 16771.68 4559.03 46957.97 00:14:42.001 =================================================================================================================== 00:14:42.001 Total : 7612.27 29.74 
0.00 0.00 16771.68 4559.03 46957.97 00:14:42.001 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3212760 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3212763 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.260 rmmod nvme_tcp 00:14:42.260 rmmod nvme_fabrics 00:14:42.260 rmmod nvme_keyring 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3212578 ']' 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3212578 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3212578 ']' 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3212578 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3212578 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3212578' 00:14:42.260 killing process with pid 3212578 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3212578 00:14:42.260 07:50:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3212578 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.519 07:50:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.055 07:50:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.055 00:14:45.055 real 0m11.296s 00:14:45.055 user 0m19.645s 00:14:45.055 sys 0m6.080s 00:14:45.055 07:50:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.055 07:50:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:45.055 ************************************ 00:14:45.055 END TEST nvmf_bdev_io_wait 00:14:45.055 ************************************ 00:14:45.055 07:50:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:45.055 07:50:29 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:45.055 07:50:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:45.055 07:50:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.055 07:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.055 ************************************ 00:14:45.055 START TEST nvmf_queue_depth 00:14:45.055 ************************************ 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:45.055 * Looking for test storage... 
00:14:45.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.055 07:50:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:45.056 07:50:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.366 
07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:50.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:50.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:50.366 Found net devices under 0000:86:00.0: cvl_0_0 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:50.366 Found net devices under 0000:86:00.1: cvl_0_1 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.366 07:50:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.366 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.366 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.366 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.366 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.366 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.367 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.367 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:50.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:14:50.367 
00:14:50.367 --- 10.0.0.2 ping statistics ---
00:14:50.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:50.367 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:14:50.367 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:50.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:50.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:14:50.626 
00:14:50.626 --- 10.0.0.1 ping statistics ---
00:14:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:50.626 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3216648
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3216648
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3216648 ']'
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:50.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:50.626 07:50:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:50.626 [2024-07-15 07:50:35.214862] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:14:50.626 [2024-07-15 07:50:35.214910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.626 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.626 [2024-07-15 07:50:35.287811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.626 [2024-07-15 07:50:35.365703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.626 [2024-07-15 07:50:35.365739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.626 [2024-07-15 07:50:35.365750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.626 [2024-07-15 07:50:35.365756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.626 [2024-07-15 07:50:35.365761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.626 [2024-07-15 07:50:35.365777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 [2024-07-15 07:50:36.064162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 Malloc0 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.564 
07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 [2024-07-15 07:50:36.131099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3216784 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3216784 /var/tmp/bdevperf.sock 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3216784 ']' 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.564 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:51.565 [2024-07-15 07:50:36.178823] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:14:51.565 [2024-07-15 07:50:36.178866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216784 ]
00:14:51.565 EAL: No free 2048 kB hugepages reported on node 1
00:14:51.565 [2024-07-15 07:50:36.247077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:51.823 [2024-07-15 07:50:36.320756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:52.391 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:52.391 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:14:52.391 07:50:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:14:52.391 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.391 07:50:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:52.650 NVMe0n1
00:14:52.650 07:50:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.650 07:50:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:52.650 Running I/O for 10 seconds...
00:15:02.627 
00:15:02.627 Latency(us)
00:15:02.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.627 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:15:02.627 Verification LBA range: start 0x0 length 0x4000
00:15:02.627 NVMe0n1 : 10.06 12245.81 47.84 0.00 0.00 83304.90 18008.15 54708.31
00:15:02.627 ===================================================================================================================
00:15:02.627 Total : 12245.81 47.84 0.00 0.00 83304.90 18008.15 54708.31
00:15:02.627 0
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3216784
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3216784 ']'
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3216784
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:02.627 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3216784
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3216784'
00:15:02.886 killing process with pid 3216784
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3216784
00:15:02.886 Received shutdown signal, test time was about 10.000000 seconds
00:15:02.886 
00:15:02.886 Latency(us)
00:15:02.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.886 ===================================================================================================================
00:15:02.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3216784
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:02.886 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:02.886 rmmod nvme_tcp
00:15:02.886 rmmod nvme_fabrics
00:15:02.886 rmmod nvme_keyring
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3216648 ']'
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3216648
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3216648 ']'
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3216648
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3216648
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3216648'
00:15:03.144 killing process with pid 3216648
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3216648
00:15:03.144 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3216648
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:03.403 07:50:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:05.307 07:50:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.307 00:15:05.307 real 0m20.687s 00:15:05.307 user 0m25.027s 00:15:05.307 sys 0m5.961s 00:15:05.307 07:50:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.307 07:50:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:05.307 ************************************ 00:15:05.307 END TEST nvmf_queue_depth 00:15:05.307 ************************************ 00:15:05.307 07:50:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:05.307 07:50:50 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.307 07:50:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:05.307 07:50:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.307 07:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.307 ************************************ 00:15:05.307 START TEST nvmf_target_multipath 00:15:05.307 ************************************ 00:15:05.307 07:50:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.566 * Looking for test storage... 00:15:05.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.566 07:50:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.132 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:12.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:12.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:12.133 Found net devices under 0000:86:00.0: cvl_0_0 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:12.133 Found net devices under 0000:86:00.1: cvl_0_1 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:12.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:12.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:15:12.133 
00:15:12.133 --- 10.0.0.2 ping statistics ---
00:15:12.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:12.133 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:12.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:12.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:15:12.133 
00:15:12.133 --- 10.0.0.1 ping statistics ---
00:15:12.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:12.133 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:15:12.133 only one NIC for nvmf test
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:12.133 rmmod nvme_tcp
00:15:12.133 rmmod nvme_fabrics
00:15:12.133 rmmod nvme_keyring
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:12.133 07:50:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.508 00:15:13.508 real 0m8.040s 00:15:13.508 user 0m1.663s 00:15:13.508 sys 0m4.369s 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.508 07:50:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 ************************************ 00:15:13.508 END TEST nvmf_target_multipath 00:15:13.508 ************************************ 00:15:13.508 07:50:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.508 07:50:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:13.508 07:50:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.508 07:50:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.508 07:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 ************************************ 00:15:13.508 START TEST nvmf_zcopy 00:15:13.508 ************************************ 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:13.508 * Looking for test storage... 
00:15:13.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.508 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.767 07:50:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.768 07:50:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:19.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.042 
07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:19.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:19.042 Found net devices under 0000:86:00.0: cvl_0_0 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:19.042 Found net devices under 0000:86:00.1: cvl_0_1 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.042 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.302 07:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:15:19.302 00:15:19.302 --- 10.0.0.2 ping statistics --- 00:15:19.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.302 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:15:19.302 00:15:19.302 --- 10.0.0.1 ping statistics --- 00:15:19.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.302 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3225775 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3225775 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3225775 ']' 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.302 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.561 [2024-07-15 07:51:04.095165] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:15:19.561 [2024-07-15 07:51:04.095208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.561 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.561 [2024-07-15 07:51:04.166527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.561 [2024-07-15 07:51:04.245432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.561 [2024-07-15 07:51:04.245464] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:19.561 [2024-07-15 07:51:04.245471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.561 [2024-07-15 07:51:04.245477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.561 [2024-07-15 07:51:04.245483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.561 [2024-07-15 07:51:04.245508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 [2024-07-15 07:51:04.943977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 [2024-07-15 07:51:04.960090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 malloc0 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 
07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:20.498 { 00:15:20.498 "params": { 00:15:20.498 "name": "Nvme$subsystem", 00:15:20.498 "trtype": "$TEST_TRANSPORT", 00:15:20.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.498 "adrfam": "ipv4", 00:15:20.498 "trsvcid": "$NVMF_PORT", 00:15:20.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.498 "hdgst": ${hdgst:-false}, 00:15:20.498 "ddgst": ${ddgst:-false} 00:15:20.498 }, 00:15:20.498 "method": "bdev_nvme_attach_controller" 00:15:20.498 } 00:15:20.498 EOF 00:15:20.498 )") 00:15:20.498 07:51:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:20.498 07:51:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:20.498 07:51:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:20.498 07:51:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.498 "params": { 00:15:20.498 "name": "Nvme1", 00:15:20.498 "trtype": "tcp", 00:15:20.498 "traddr": "10.0.0.2", 00:15:20.498 "adrfam": "ipv4", 00:15:20.498 "trsvcid": "4420", 00:15:20.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.498 "hdgst": false, 00:15:20.498 "ddgst": false 00:15:20.498 }, 00:15:20.498 "method": "bdev_nvme_attach_controller" 00:15:20.498 }' 00:15:20.498 [2024-07-15 07:51:05.038797] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:15:20.499 [2024-07-15 07:51:05.038840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225994 ] 00:15:20.499 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.499 [2024-07-15 07:51:05.107311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.499 [2024-07-15 07:51:05.183638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.065 Running I/O for 10 seconds... 
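For readers following the trace: the rpc_cmd calls above are the complete target-side setup for the zcopy test. Below is a minimal sketch of the same sequence issued through SPDK's scripts/rpc.py (rpc_cmd in the test harness is a thin wrapper around it); the commands and flags are copied verbatim from the trace, and a running nvmf_tgt on the default /var/tmp/spdk.sock is assumed:

# Minimal sketch; assumes nvmf_tgt is already running (here it was started
# inside the cvl_0_0_ns_spdk network namespace, as traced earlier).
# TCP transport with zero-copy enabled; flags exactly as traced above.
rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1: -a allows any host, -s sets the serial number,
# -m caps it at 10 namespaces.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Data and discovery listeners on the namespaced target address.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1.
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

With that in place, bdevperf connects as nqn.2016-06.io.spdk:host1 using the JSON config printed above and runs the 10-second verify workload whose summary follows.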
00:15:31.067 
00:15:31.067 Latency(us)
00:15:31.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:31.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:31.067 Verification LBA range: start 0x0 length 0x1000
00:15:31.067 Nvme1n1 : 10.01 8599.70 67.19 0.00 0.00 14841.24 2550.21 24276.81
00:15:31.067 ===================================================================================================================
00:15:31.067 Total : 8599.70 67.19 0.00 0.00 14841.24 2550.21 24276.81
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3228032
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:31.067 {
00:15:31.067 "params": {
00:15:31.067 "name": "Nvme$subsystem",
00:15:31.067 "trtype": "$TEST_TRANSPORT",
00:15:31.067 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:31.067 "adrfam": "ipv4",
00:15:31.067 "trsvcid": "$NVMF_PORT",
00:15:31.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:31.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:31.067 "hdgst": ${hdgst:-false},
00:15:31.067 "ddgst": ${ddgst:-false}
00:15:31.067 },
00:15:31.067 "method": "bdev_nvme_attach_controller"
00:15:31.067 }
00:15:31.067 EOF
00:15:31.067 )")
00:15:31.067 [2024-07-15 07:51:15.765165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:31.067 [2024-07-15 07:51:15.765195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:31.067 07:51:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:31.067 "params": { 00:15:31.067 "name": "Nvme1", 00:15:31.067 "trtype": "tcp", 00:15:31.067 "traddr": "10.0.0.2", 00:15:31.067 "adrfam": "ipv4", 00:15:31.067 "trsvcid": "4420", 00:15:31.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.067 "hdgst": false, 00:15:31.067 "ddgst": false 00:15:31.067 }, 00:15:31.067 "method": "bdev_nvme_attach_controller" 00:15:31.067 }' 00:15:31.067 [2024-07-15 07:51:15.777163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.067 [2024-07-15 07:51:15.777176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.067 [2024-07-15 07:51:15.789195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.067 [2024-07-15 07:51:15.789206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.067 [2024-07-15 07:51:15.801238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.067 [2024-07-15 07:51:15.801248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.067 [2024-07-15 07:51:15.806400] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:15:31.067 [2024-07-15 07:51:15.806443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228032 ] 00:15:31.067 [2024-07-15 07:51:15.813262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.067 [2024-07-15 07:51:15.813274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.825291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.825302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.326 [2024-07-15 07:51:15.837320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.837331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.849352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.849362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.861383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.861394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.873416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.873426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.875119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.326 [2024-07-15 07:51:15.885451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.885465] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.897496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.897508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.909521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.909533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.921560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.921581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.933582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.933593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.945614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.945624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-07-15 07:51:15.952591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.326 [2024-07-15 07:51:15.957646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-07-15 07:51:15.957658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:15.969695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:15.969714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:15.981723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:15.981737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:15.993747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:15.993760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.005775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.005787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.017808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.017820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.029843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.029858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.041888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.041907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.053911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.053926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.065974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.065989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-07-15 07:51:16.077994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-07-15 07:51:16.078009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.090023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.090035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.102056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.102067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.114086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.114097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.126121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.126135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.138156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.138169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.150188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.150199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.162221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.162238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.174261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.174274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.186305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.186316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.198320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.198331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.210359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.210371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.260598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.260616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.270526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:15:31.586 [2024-07-15 07:51:16.270539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 Running I/O for 5 seconds... 00:15:31.586 [2024-07-15 07:51:16.286665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.286685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.298047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.298067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.312259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.312278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.326511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.326531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-07-15 07:51:16.338037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-07-15 07:51:16.338056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.352824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.352844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.364244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.364280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.378820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.378839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.392598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.392616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.401820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.401839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.410690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.410709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.425143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.425162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.439113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.439132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.453342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.453361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:31.845 [2024-07-15 07:51:16.467275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.467295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.481104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.481127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.495084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.495102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.503885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.503903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.513034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.513052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.521779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.521798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.536611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.536630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.552010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.552029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.565981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.566000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.574979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.574998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.845 [2024-07-15 07:51:16.589234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.845 [2024-07-15 07:51:16.589252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.104 [2024-07-15 07:51:16.603355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.603375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.617032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.617051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.626205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.626229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.634870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:32.105 [2024-07-15 07:51:16.634888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.644291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.644309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.652863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.652881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.667320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.667339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.680876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.680896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.689851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.689869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.704556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.704578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.715709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.715727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.730019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.730044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.739182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.739200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.748135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.748154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.756827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.756845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.766073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.766090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.780863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.780882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.792034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.792053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.801467] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.801495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.815965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.815983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.830116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.830135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.840943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.840963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.105 [2024-07-15 07:51:16.855608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.105 [2024-07-15 07:51:16.855628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.864594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.864618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.873968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.873986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.883379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.883397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.898246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.898264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.912103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.912121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.926179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.926202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.940300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.940319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.954338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.954357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.968026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.968045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.981934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.981953] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:16.995602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:16.995621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.009535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.009553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.023597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.023615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.037238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.037257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.046134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.046154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.055680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.055699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.070866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.070885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.082057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.082075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.097077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.097096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.364 [2024-07-15 07:51:17.112115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.364 [2024-07-15 07:51:17.112134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.623 [2024-07-15 07:51:17.126385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.623 [2024-07-15 07:51:17.126405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.623 [2024-07-15 07:51:17.140637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.623 [2024-07-15 07:51:17.140656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.623 [2024-07-15 07:51:17.154252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.623 [2024-07-15 07:51:17.154270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.623 [2024-07-15 07:51:17.168043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.623 [2024-07-15 07:51:17.168062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.623 [2024-07-15 07:51:17.181660] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.623 [2024-07-15 07:51:17.181682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same two-message error pair (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats approximately 300 more times, from [2024-07-15 07:51:17.190621] through [2024-07-15 07:51:20.815567], log time 00:15:32.623 to 00:15:36.272 ...] 
00:15:36.272 [2024-07-15 07:51:20.824849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.272 [2024-07-15 07:51:20.824867] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.272 [2024-07-15 07:51:20.833647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.272 [2024-07-15 07:51:20.833666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.272 [2024-07-15 07:51:20.842522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.272 [2024-07-15 07:51:20.842542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.272 [2024-07-15 07:51:20.856838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.272 [2024-07-15 07:51:20.856858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.272 [2024-07-15 07:51:20.870280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.870300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.884427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.884447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.897417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.897436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.906312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.906331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.915603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.915622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.924437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.924455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.938858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.938877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.952589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.952607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.967081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.967099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.982753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.982771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:20.991616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:20.991634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:21.006454] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:21.006473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.273 [2024-07-15 07:51:21.017272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.273 [2024-07-15 07:51:21.017291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.031805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.031825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.041005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.041025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.055508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.055527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.069521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.069551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.078653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.078672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.088201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.088219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.097570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.097588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.106262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.106281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.120540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.120559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.129636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.129655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.144119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.144138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.157885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.157904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.172009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.172028] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.186191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.186211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.195218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.195241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.209384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.209404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.218659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.218679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.228134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.228153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.245766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.245785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.260380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.260400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.532 [2024-07-15 07:51:21.273932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.532 [2024-07-15 07:51:21.273951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.791 [2024-07-15 07:51:21.288113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.791 [2024-07-15 07:51:21.288133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.791 00:15:36.791 Latency(us) 00:15:36.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.791 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:36.791 Nvme1n1 : 5.01 16648.21 130.06 0.00 0.00 7680.85 3390.78 18578.03 00:15:36.791 =================================================================================================================== 00:15:36.791 Total : 16648.21 130.06 0.00 0.00 7680.85 3390.78 18578.03 00:15:36.791 [2024-07-15 07:51:21.297712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.791 [2024-07-15 07:51:21.297730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.791 [2024-07-15 07:51:21.309742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.791 [2024-07-15 07:51:21.309758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.791 [2024-07-15 07:51:21.321793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.791 [2024-07-15 07:51:21.321809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.333812] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.333828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.345843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.345858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.357868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.357882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.369903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.369919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.381935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.381949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.393963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.393978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.405991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.406001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.418031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.418046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.430060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.430071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.442091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.442101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.454125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.454136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.466157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.466167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 [2024-07-15 07:51:21.478185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.792 [2024-07-15 07:51:21.478195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3228032) - No such process 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3228032 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.792 delay0 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.792 07:51:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:37.051 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.051 [2024-07-15 07:51:21.620359] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:43.619 Initializing NVMe Controllers 00:15:43.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:43.619 Initialization complete. Launching workers. 
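For anyone replaying this stage by hand, the traced commands above map onto plain rpc.py calls. A minimal sketch, assuming a shell in the SPDK repo root, that rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and that the target started earlier in the run is still listening on 10.0.0.2:4420; the completion counters for the actual run follow below:

# Wrap the existing malloc0 bdev in a delay bdev (1,000,000 us on reads,
# writes, and their p99s, per the bdev_delay_create flags in the trace),
# then expose it as NSID 1 of cnode1.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Fire queued I/O plus abort requests at the deliberately slow namespace
# with the bundled example binary, as target/zcopy.sh line 56 does above.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'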
00:15:43.619 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 99 00:15:43.619 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 33 00:15:43.619 success 164, unsuccess 222, failed 0 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.619 rmmod nvme_tcp 00:15:43.619 rmmod nvme_fabrics 00:15:43.619 rmmod nvme_keyring 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3225775 ']' 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3225775 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3225775 ']' 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3225775 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3225775 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3225775' 00:15:43.619 killing process with pid 3225775 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3225775 00:15:43.619 07:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3225775 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.619 07:51:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.520 07:51:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.520 00:15:45.520 real 0m32.009s 00:15:45.520 user 0m43.406s 00:15:45.520 sys 0m10.744s 00:15:45.520 07:51:30 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.520 07:51:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:45.520 ************************************ 00:15:45.520 END TEST nvmf_zcopy 00:15:45.520 ************************************ 00:15:45.520 07:51:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:45.520 07:51:30 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:45.520 07:51:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.520 07:51:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.520 07:51:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.520 ************************************ 00:15:45.520 START TEST nvmf_nmic 00:15:45.520 ************************************ 00:15:45.520 07:51:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:45.779 * Looking for test storage... 00:15:45.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicated toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same list, duplicates elided ...]
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same list, duplicates elided ...]
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... exported PATH echoed back; duplicates elided ...]
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.779 07:51:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:52.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:52.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.345 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:52.346 Found net devices under 0000:86:00.0: cvl_0_0 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:52.346 Found net devices under 0000:86:00.1: cvl_0_1 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.346 07:51:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:15:52.346 00:15:52.346 --- 10.0.0.2 ping statistics --- 00:15:52.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.346 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:52.346 00:15:52.346 --- 10.0.0.1 ping statistics --- 00:15:52.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.346 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3233597 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3233597 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3233597 ']' 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.346 07:51:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 [2024-07-15 07:51:36.203846] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:15:52.346 [2024-07-15 07:51:36.203891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.346 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.346 [2024-07-15 07:51:36.274029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.346 [2024-07-15 07:51:36.355918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.346 [2024-07-15 07:51:36.355954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:52.346 [2024-07-15 07:51:36.355961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.346 [2024-07-15 07:51:36.355967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.346 [2024-07-15 07:51:36.355974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.346 [2024-07-15 07:51:36.356019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.346 [2024-07-15 07:51:36.356129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.346 [2024-07-15 07:51:36.356155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.346 [2024-07-15 07:51:36.356156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 [2024-07-15 07:51:37.052211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 Malloc0 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.346 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 [2024-07-15 07:51:37.104203] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:52.650 test case1: single bdev can't be used in multiple subsystems 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 [2024-07-15 07:51:37.128135] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:52.650 [2024-07-15 07:51:37.128155] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:52.650 [2024-07-15 07:51:37.128161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.650 request: 00:15:52.650 { 00:15:52.650 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:52.650 "namespace": { 00:15:52.650 "bdev_name": "Malloc0", 00:15:52.650 "no_auto_visible": false 00:15:52.650 }, 00:15:52.650 "method": "nvmf_subsystem_add_ns", 00:15:52.650 "req_id": 1 00:15:52.650 } 00:15:52.650 Got JSON-RPC error response 00:15:52.650 response: 00:15:52.650 { 00:15:52.650 "code": -32602, 00:15:52.650 "message": "Invalid parameters" 00:15:52.650 } 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:52.650 Adding namespace failed - expected result. 
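The JSON-RPC error above is the expected outcome of test case1: cnode1 has already claimed Malloc0 (the bdev log line shows it opened exclusive_write), so a second subsystem cannot attach the same bdev. A minimal sketch of the same negative check against a live target, assuming scripts/rpc.py and the objects created in the trace; only the final call should fail:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Malloc0 already belongs to cnode1, so this add_ns must be rejected
# with "Invalid parameters", mirroring the -32602 response above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected: add_ns succeeded' \
    || echo 'add_ns rejected, as expected'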
00:15:52.650 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:52.650 test case2: host connect to nvmf target in multiple paths 00:15:52.651 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.651 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.651 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.651 [2024-07-15 07:51:37.140255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.651 07:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.651 07:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.597 07:51:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:54.975 07:51:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.975 07:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:54.975 07:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.975 07:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:54.975 07:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:56.880 07:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:56.880 [global] 00:15:56.880 thread=1 00:15:56.880 invalidate=1 00:15:56.880 rw=write 00:15:56.880 time_based=1 00:15:56.880 runtime=1 00:15:56.880 ioengine=libaio 00:15:56.880 direct=1 00:15:56.880 bs=4096 00:15:56.880 iodepth=1 00:15:56.880 norandommap=0 00:15:56.880 numjobs=1 00:15:56.880 00:15:56.880 verify_dump=1 00:15:56.880 verify_backlog=512 00:15:56.880 verify_state_save=0 00:15:56.880 do_verify=1 00:15:56.880 verify=crc32c-intel 00:15:56.880 [job0] 00:15:56.880 filename=/dev/nvme0n1 00:15:56.880 Could not set queue depth (nvme0n1) 00:15:57.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.139 fio-3.35 00:15:57.139 Starting 1 thread 00:15:58.516 00:15:58.516 job0: (groupid=0, jobs=1): err= 0: pid=3234678: Mon Jul 15 07:51:42 2024 00:15:58.516 read: IOPS=2269, BW=9079KiB/s (9297kB/s)(9088KiB/1001msec) 00:15:58.516 slat (nsec): min=6332, max=25015, avg=7092.23, stdev=817.78 
00:15:58.516 clat (usec): min=201, max=410, avg=243.65, stdev=19.11 00:15:58.516 lat (usec): min=208, max=417, avg=250.74, stdev=19.15 00:15:58.516 clat percentiles (usec): 00:15:58.516 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 235], 00:15:58.516 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:15:58.516 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 281], 00:15:58.516 | 99.00th=[ 289], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 392], 00:15:58.516 | 99.99th=[ 412] 00:15:58.516 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:58.516 slat (nsec): min=8966, max=40162, avg=9963.59, stdev=1106.63 00:15:58.516 clat (usec): min=118, max=372, avg=154.12, stdev= 9.23 00:15:58.516 lat (usec): min=128, max=412, avg=164.09, stdev= 9.54 00:15:58.516 clat percentiles (usec): 00:15:58.516 | 1.00th=[ 127], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 149], 00:15:58.516 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:15:58.516 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 165], 00:15:58.516 | 99.00th=[ 172], 99.50th=[ 174], 99.90th=[ 212], 99.95th=[ 229], 00:15:58.516 | 99.99th=[ 371] 00:15:58.516 bw ( KiB/s): min=11840, max=11840, per=100.00%, avg=11840.00, stdev= 0.00, samples=1 00:15:58.516 iops : min= 2960, max= 2960, avg=2960.00, stdev= 0.00, samples=1 00:15:58.516 lat (usec) : 250=89.28%, 500=10.72% 00:15:58.516 cpu : usr=2.70%, sys=4.00%, ctx=4832, majf=0, minf=2 00:15:58.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.516 issued rwts: total=2272,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.516 00:15:58.516 Run status group 0 (all jobs): 00:15:58.516 READ: bw=9079KiB/s (9297kB/s), 9079KiB/s-9079KiB/s (9297kB/s-9297kB/s), io=9088KiB (9306kB), run=1001-1001msec 00:15:58.516 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:15:58.516 00:15:58.516 Disk stats (read/write): 00:15:58.516 nvme0n1: ios=2098/2296, merge=0/0, ticks=683/353, in_queue=1036, util=95.59% 00:15:58.516 07:51:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:58.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.516 rmmod nvme_tcp 00:15:58.516 rmmod nvme_fabrics 00:15:58.516 rmmod nvme_keyring 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.516 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3233597 ']' 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3233597 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3233597 ']' 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3233597 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233597 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233597' 00:15:58.517 killing process with pid 3233597 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3233597 00:15:58.517 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3233597 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.775 07:51:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.311 07:51:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.311 00:16:01.311 real 0m15.249s 00:16:01.311 user 0m35.224s 00:16:01.311 sys 0m5.223s 00:16:01.311 07:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.311 07:51:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:01.311 ************************************ 00:16:01.311 END TEST nvmf_nmic 00:16:01.311 ************************************ 00:16:01.311 07:51:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.311 07:51:45 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.311 07:51:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.311 
07:51:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.311 07:51:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.311 ************************************ 00:16:01.311 START TEST nvmf_fio_target 00:16:01.311 ************************************ 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.311 * Looking for test storage... 00:16:01.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.311 07:51:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.312 07:51:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.587 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.587 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.587 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.587 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.587 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.588 07:51:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:06.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:06.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.588 07:51:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:06.588 Found net devices under 0000:86:00.0: cvl_0_0 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:06.588 Found net devices under 0000:86:00.1: cvl_0_1 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.588 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:16:06.850 00:16:06.850 --- 10.0.0.2 ping statistics --- 00:16:06.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.850 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:16:06.850 00:16:06.850 --- 10.0.0.1 ping statistics --- 00:16:06.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.850 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3238333 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3238333 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3238333 ']' 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
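[editor's note] At this point the phy-mode plumbing for nvmf_fio_target is fully up: the first E810 port (cvl_0_0) has been moved into a fresh network namespace as the target interface at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP to port 4420, and both directions answer ping. A condensed replay of the nvmf_tcp_init steps traced above (commands lifted from the trace; cvl_0_0/cvl_0_1 are simply what this rig's NICs enumerate as, so substitute your own; run as root):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the trace just above), which is why every listener in this log binds 10.0.0.2.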
00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.850 07:51:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 [2024-07-15 07:51:51.511999] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:16:06.850 [2024-07-15 07:51:51.512044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.850 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.850 [2024-07-15 07:51:51.581850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.109 [2024-07-15 07:51:51.661301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.109 [2024-07-15 07:51:51.661340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.109 [2024-07-15 07:51:51.661347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.109 [2024-07-15 07:51:51.661354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.109 [2024-07-15 07:51:51.661359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.109 [2024-07-15 07:51:51.661421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.109 [2024-07-15 07:51:51.661449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.109 [2024-07-15 07:51:51.661474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.109 [2024-07-15 07:51:51.661475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.677 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:07.936 [2024-07-15 07:51:52.510786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.936 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.195 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:08.195 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.454 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:08.454 07:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.454 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
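[editor's note] fio.sh is now provisioning its backing devices: plain malloc bdevs (Malloc0, Malloc1) for direct namespaces, plus pairs and triples that the following trace lines assemble into raid0 and concat0 raid bdevs before everything is exported through cnode1. A sketch of the whole sequence using the RPCs and arguments that appear in this log (64 MiB bdevs with 512-byte blocks per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above; the concat, namespace, and listener steps are traced a little further down):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do
    $rpc bdev_malloc_create 64 512                 # -> Malloc0 .. Malloc6
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Exporting the raid bdevs alongside the plain mallocs is what gives the initiator the four namespaces (nvme0n1..nvme0n4) that the fio jobs below target.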
00:16:08.454 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.712 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:08.712 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:08.970 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.970 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:08.970 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.229 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:09.229 07:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.488 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:09.488 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:09.747 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:09.747 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:09.747 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.005 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:10.005 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:10.264 07:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.264 [2024-07-15 07:51:55.009205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.522 07:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:10.523 07:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:10.781 07:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:12.159 07:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:14.060 07:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:14.060 [global] 00:16:14.060 thread=1 00:16:14.060 invalidate=1 00:16:14.060 rw=write 00:16:14.060 time_based=1 00:16:14.060 runtime=1 00:16:14.060 ioengine=libaio 00:16:14.060 direct=1 00:16:14.060 bs=4096 00:16:14.060 iodepth=1 00:16:14.060 norandommap=0 00:16:14.060 numjobs=1 00:16:14.060 00:16:14.060 verify_dump=1 00:16:14.060 verify_backlog=512 00:16:14.060 verify_state_save=0 00:16:14.060 do_verify=1 00:16:14.060 verify=crc32c-intel 00:16:14.060 [job0] 00:16:14.060 filename=/dev/nvme0n1 00:16:14.060 [job1] 00:16:14.060 filename=/dev/nvme0n2 00:16:14.060 [job2] 00:16:14.060 filename=/dev/nvme0n3 00:16:14.060 [job3] 00:16:14.060 filename=/dev/nvme0n4 00:16:14.060 Could not set queue depth (nvme0n1) 00:16:14.060 Could not set queue depth (nvme0n2) 00:16:14.060 Could not set queue depth (nvme0n3) 00:16:14.060 Could not set queue depth (nvme0n4) 00:16:14.322 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.322 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.322 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.322 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.322 fio-3.35 00:16:14.322 Starting 4 threads 00:16:15.724 00:16:15.724 job0: (groupid=0, jobs=1): err= 0: pid=3239777: Mon Jul 15 07:52:00 2024 00:16:15.724 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:16:15.724 slat (nsec): min=6087, max=23170, avg=13297.86, stdev=5577.54 00:16:15.724 clat (usec): min=40886, max=41976, avg=41121.27, stdev=340.27 00:16:15.724 lat (usec): min=40901, max=41986, avg=41134.56, stdev=338.89 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:15.724 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:15.724 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:15.724 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:15.724 | 99.99th=[42206] 00:16:15.724 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:16:15.724 slat (nsec): min=5364, max=56646, avg=7499.17, stdev=2529.49 
00:16:15.724 clat (usec): min=127, max=4114, avg=214.54, stdev=175.62 00:16:15.724 lat (usec): min=136, max=4171, avg=222.04, stdev=177.80 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 172], 00:16:15.724 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:16:15.724 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 247], 00:16:15.724 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 4113], 99.95th=[ 4113], 00:16:15.724 | 99.99th=[ 4113] 00:16:15.724 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:16:15.724 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:15.724 lat (usec) : 250=93.63%, 500=2.06% 00:16:15.724 lat (msec) : 10=0.19%, 50=4.12% 00:16:15.724 cpu : usr=0.20%, sys=0.29%, ctx=535, majf=0, minf=1 00:16:15.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.724 job1: (groupid=0, jobs=1): err= 0: pid=3239778: Mon Jul 15 07:52:00 2024 00:16:15.724 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:16:15.724 slat (nsec): min=9721, max=26929, avg=15209.82, stdev=4234.66 00:16:15.724 clat (usec): min=40877, max=42002, avg=41035.01, stdev=223.21 00:16:15.724 lat (usec): min=40904, max=42013, avg=41050.22, stdev=221.95 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:15.724 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:15.724 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:15.724 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:15.724 | 99.99th=[42206] 00:16:15.724 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:16:15.724 slat (nsec): min=9418, max=36499, avg=11546.75, stdev=1912.70 00:16:15.724 clat (usec): min=125, max=855, avg=201.48, stdev=42.23 00:16:15.724 lat (usec): min=137, max=864, avg=213.03, stdev=42.64 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 165], 00:16:15.724 | 30.00th=[ 188], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:16:15.724 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:16:15.724 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 857], 99.95th=[ 857], 00:16:15.724 | 99.99th=[ 857] 00:16:15.724 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:16:15.724 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:15.724 lat (usec) : 250=93.07%, 500=2.62%, 1000=0.19% 00:16:15.724 lat (msec) : 50=4.12% 00:16:15.724 cpu : usr=0.39%, sys=0.49%, ctx=536, majf=0, minf=1 00:16:15.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.724 job2: (groupid=0, jobs=1): err= 0: pid=3239779: Mon Jul 15 07:52:00 2024 00:16:15.724 
read: IOPS=2270, BW=9083KiB/s (9301kB/s)(9092KiB/1001msec) 00:16:15.724 slat (nsec): min=6452, max=29232, avg=7305.20, stdev=941.52 00:16:15.724 clat (usec): min=175, max=490, avg=227.70, stdev=15.43 00:16:15.724 lat (usec): min=182, max=498, avg=235.00, stdev=15.45 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[ 190], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 219], 00:16:15.724 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:16:15.724 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:16:15.724 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 322], 99.95th=[ 465], 00:16:15.724 | 99.99th=[ 490] 00:16:15.724 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:15.724 slat (usec): min=9, max=25759, avg=20.64, stdev=508.91 00:16:15.724 clat (usec): min=122, max=2002, avg=157.29, stdev=47.41 00:16:15.724 lat (usec): min=133, max=26099, avg=177.94, stdev=514.69 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:16:15.724 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:16:15.724 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 215], 95.00th=[ 223], 00:16:15.724 | 99.00th=[ 251], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 416], 00:16:15.724 | 99.99th=[ 2008] 00:16:15.724 bw ( KiB/s): min=10344, max=10344, per=64.40%, avg=10344.00, stdev= 0.00, samples=1 00:16:15.724 iops : min= 2586, max= 2586, avg=2586.00, stdev= 0.00, samples=1 00:16:15.724 lat (usec) : 250=97.33%, 500=2.65% 00:16:15.724 lat (msec) : 4=0.02% 00:16:15.724 cpu : usr=2.80%, sys=4.10%, ctx=4835, majf=0, minf=1 00:16:15.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.724 issued rwts: total=2273,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.724 job3: (groupid=0, jobs=1): err= 0: pid=3239780: Mon Jul 15 07:52:00 2024 00:16:15.724 read: IOPS=39, BW=158KiB/s (162kB/s)(160KiB/1010msec) 00:16:15.724 slat (nsec): min=6900, max=25366, avg=15647.08, stdev=7417.33 00:16:15.724 clat (usec): min=215, max=41107, avg=22646.89, stdev=20501.84 00:16:15.724 lat (usec): min=222, max=41127, avg=22662.54, stdev=20508.41 00:16:15.724 clat percentiles (usec): 00:16:15.724 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237], 00:16:15.724 | 30.00th=[ 245], 40.00th=[ 285], 50.00th=[40633], 60.00th=[41157], 00:16:15.725 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:15.725 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:15.725 | 99.99th=[41157] 00:16:15.725 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:16:15.725 slat (nsec): min=9186, max=70643, avg=10789.16, stdev=3862.96 00:16:15.725 clat (usec): min=132, max=335, avg=188.50, stdev=20.42 00:16:15.725 lat (usec): min=142, max=388, avg=199.29, stdev=21.65 00:16:15.725 clat percentiles (usec): 00:16:15.725 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:16:15.725 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:16:15.725 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:16:15.725 | 99.00th=[ 235], 99.50th=[ 318], 99.90th=[ 334], 99.95th=[ 334], 00:16:15.725 | 99.99th=[ 334] 00:16:15.725 bw ( KiB/s): min= 4096, max= 4096, 
per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:16:15.725 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:15.725 lat (usec) : 250=94.38%, 500=1.45%, 750=0.18% 00:16:15.725 lat (msec) : 50=3.99% 00:16:15.725 cpu : usr=0.30%, sys=0.50%, ctx=552, majf=0, minf=2 00:16:15.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.725 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.725 00:16:15.725 Run status group 0 (all jobs): 00:16:15.725 READ: bw=9243KiB/s (9465kB/s), 86.3KiB/s-9083KiB/s (88.3kB/s-9301kB/s), io=9428KiB (9654kB), run=1001-1020msec 00:16:15.725 WRITE: bw=15.7MiB/s (16.4MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1020msec 00:16:15.725 00:16:15.725 Disk stats (read/write): 00:16:15.725 nvme0n1: ios=67/512, merge=0/0, ticks=727/109, in_queue=836, util=86.97% 00:16:15.725 nvme0n2: ios=68/512, merge=0/0, ticks=1460/100, in_queue=1560, util=98.68% 00:16:15.725 nvme0n3: ios=2016/2048, merge=0/0, ticks=1428/316, in_queue=1744, util=98.54% 00:16:15.725 nvme0n4: ios=36/512, merge=0/0, ticks=742/92, in_queue=834, util=89.61% 00:16:15.725 07:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:15.725 [global] 00:16:15.725 thread=1 00:16:15.725 invalidate=1 00:16:15.725 rw=randwrite 00:16:15.725 time_based=1 00:16:15.725 runtime=1 00:16:15.725 ioengine=libaio 00:16:15.725 direct=1 00:16:15.725 bs=4096 00:16:15.725 iodepth=1 00:16:15.725 norandommap=0 00:16:15.725 numjobs=1 00:16:15.725 00:16:15.725 verify_dump=1 00:16:15.725 verify_backlog=512 00:16:15.725 verify_state_save=0 00:16:15.725 do_verify=1 00:16:15.725 verify=crc32c-intel 00:16:15.725 [job0] 00:16:15.725 filename=/dev/nvme0n1 00:16:15.725 [job1] 00:16:15.725 filename=/dev/nvme0n2 00:16:15.725 [job2] 00:16:15.725 filename=/dev/nvme0n3 00:16:15.725 [job3] 00:16:15.725 filename=/dev/nvme0n4 00:16:15.725 Could not set queue depth (nvme0n1) 00:16:15.725 Could not set queue depth (nvme0n2) 00:16:15.725 Could not set queue depth (nvme0n3) 00:16:15.725 Could not set queue depth (nvme0n4) 00:16:16.028 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.028 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.028 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.028 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.028 fio-3.35 00:16:16.028 Starting 4 threads 00:16:17.405 00:16:17.405 job0: (groupid=0, jobs=1): err= 0: pid=3240157: Mon Jul 15 07:52:01 2024 00:16:17.405 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:16:17.405 slat (nsec): min=10218, max=27863, avg=21431.43, stdev=2989.64 00:16:17.405 clat (usec): min=40767, max=41180, avg=40975.34, stdev=86.01 00:16:17.405 lat (usec): min=40790, max=41201, avg=40996.77, stdev=85.52 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:16:17.405 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:16:17.405 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:17.405 | 99.99th=[41157] 00:16:17.405 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:16:17.405 slat (nsec): min=10495, max=36583, avg=11757.71, stdev=1793.92 00:16:17.405 clat (usec): min=138, max=319, avg=171.69, stdev=15.19 00:16:17.405 lat (usec): min=152, max=355, avg=183.45, stdev=15.78 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:16:17.405 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:16:17.405 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:16:17.405 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 318], 99.95th=[ 318], 00:16:17.405 | 99.99th=[ 318] 00:16:17.405 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:17.405 lat (usec) : 250=94.95%, 500=0.75% 00:16:17.405 lat (msec) : 50=4.30% 00:16:17.405 cpu : usr=0.67%, sys=0.67%, ctx=537, majf=0, minf=2 00:16:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.405 job1: (groupid=0, jobs=1): err= 0: pid=3240158: Mon Jul 15 07:52:01 2024 00:16:17.405 read: IOPS=135, BW=542KiB/s (555kB/s)(544KiB/1004msec) 00:16:17.405 slat (nsec): min=7274, max=23297, avg=10083.52, stdev=5222.63 00:16:17.405 clat (usec): min=185, max=41226, avg=6504.38, stdev=14788.88 00:16:17.405 lat (usec): min=193, max=41234, avg=6514.46, stdev=14793.91 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:16:17.405 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:16:17.405 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[41157], 95.00th=[41157], 00:16:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:17.405 | 99.99th=[41157] 00:16:17.405 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:16:17.405 slat (usec): min=9, max=10969, avg=32.03, stdev=484.34 00:16:17.405 clat (usec): min=128, max=468, avg=194.51, stdev=37.44 00:16:17.405 lat (usec): min=138, max=11270, avg=226.55, stdev=490.47 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 165], 00:16:17.405 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:16:17.405 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 245], 00:16:17.405 | 99.00th=[ 285], 99.50th=[ 437], 99.90th=[ 469], 99.95th=[ 469], 00:16:17.405 | 99.99th=[ 469] 00:16:17.405 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:17.405 lat (usec) : 250=94.60%, 500=2.16% 00:16:17.405 lat (msec) : 50=3.24% 00:16:17.405 cpu : usr=0.30%, sys=0.70%, ctx=650, majf=0, minf=1 00:16:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 issued rwts: total=136,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.405 job2: (groupid=0, jobs=1): err= 0: pid=3240159: Mon Jul 15 07:52:01 2024 00:16:17.405 read: IOPS=72, BW=291KiB/s (298kB/s)(296KiB/1016msec) 00:16:17.405 slat (nsec): min=6203, max=22581, avg=11296.85, stdev=6581.88 00:16:17.405 clat (usec): min=188, max=41977, avg=12347.04, stdev=18773.24 00:16:17.405 lat (usec): min=195, max=41999, avg=12358.34, stdev=18779.14 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:16:17.405 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 229], 00:16:17.405 | 70.00th=[ 302], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:17.405 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:17.405 | 99.99th=[42206] 00:16:17.405 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:16:17.405 slat (nsec): min=8804, max=90310, avg=10377.75, stdev=3796.69 00:16:17.405 clat (usec): min=146, max=387, avg=184.79, stdev=25.84 00:16:17.405 lat (usec): min=156, max=477, avg=195.17, stdev=27.51 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:16:17.405 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:16:17.405 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 219], 95.00th=[ 243], 00:16:17.405 | 99.00th=[ 260], 99.50th=[ 318], 99.90th=[ 388], 99.95th=[ 388], 00:16:17.405 | 99.99th=[ 388] 00:16:17.405 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:17.405 lat (usec) : 250=94.03%, 500=2.22% 00:16:17.405 lat (msec) : 50=3.75% 00:16:17.405 cpu : usr=0.39%, sys=0.39%, ctx=587, majf=0, minf=1 00:16:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 issued rwts: total=74,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.405 job3: (groupid=0, jobs=1): err= 0: pid=3240160: Mon Jul 15 07:52:01 2024 00:16:17.405 read: IOPS=23, BW=95.1KiB/s (97.4kB/s)(96.0KiB/1009msec) 00:16:17.405 slat (nsec): min=9573, max=25341, avg=21777.50, stdev=3395.89 00:16:17.405 clat (usec): min=400, max=41145, avg=37578.31, stdev=11447.15 00:16:17.405 lat (usec): min=424, max=41167, avg=37600.09, stdev=11446.27 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 1.00th=[ 400], 5.00th=[ 429], 10.00th=[40633], 20.00th=[40633], 00:16:17.405 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:17.405 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:17.405 | 99.99th=[41157] 00:16:17.405 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:16:17.405 slat (nsec): min=9717, max=51955, avg=11392.15, stdev=2472.15 00:16:17.405 clat (usec): min=145, max=468, avg=193.48, stdev=32.54 00:16:17.405 lat (usec): min=155, max=480, avg=204.88, stdev=33.59 00:16:17.405 clat percentiles (usec): 00:16:17.405 | 
1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:16:17.405 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:16:17.405 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 231], 00:16:17.405 | 99.00th=[ 343], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 469], 00:16:17.405 | 99.99th=[ 469] 00:16:17.405 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:17.405 lat (usec) : 250=93.10%, 500=2.80% 00:16:17.405 lat (msec) : 50=4.10% 00:16:17.405 cpu : usr=0.40%, sys=0.50%, ctx=539, majf=0, minf=1 00:16:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.405 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.405 00:16:17.405 Run status group 0 (all jobs): 00:16:17.405 READ: bw=989KiB/s (1013kB/s), 88.5KiB/s-542KiB/s (90.7kB/s-555kB/s), io=1028KiB (1053kB), run=1004-1039msec 00:16:17.405 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2040KiB/s (2018kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1039msec 00:16:17.405 00:16:17.405 Disk stats (read/write): 00:16:17.405 nvme0n1: ios=41/512, merge=0/0, ticks=1603/82, in_queue=1685, util=85.87% 00:16:17.405 nvme0n2: ios=174/512, merge=0/0, ticks=864/94, in_queue=958, util=91.37% 00:16:17.405 nvme0n3: ios=127/512, merge=0/0, ticks=824/96, in_queue=920, util=94.80% 00:16:17.405 nvme0n4: ios=77/512, merge=0/0, ticks=974/90, in_queue=1064, util=94.14% 00:16:17.405 07:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:17.405 [global] 00:16:17.405 thread=1 00:16:17.405 invalidate=1 00:16:17.405 rw=write 00:16:17.405 time_based=1 00:16:17.405 runtime=1 00:16:17.405 ioengine=libaio 00:16:17.405 direct=1 00:16:17.405 bs=4096 00:16:17.405 iodepth=128 00:16:17.405 norandommap=0 00:16:17.405 numjobs=1 00:16:17.405 00:16:17.405 verify_dump=1 00:16:17.405 verify_backlog=512 00:16:17.405 verify_state_save=0 00:16:17.405 do_verify=1 00:16:17.405 verify=crc32c-intel 00:16:17.405 [job0] 00:16:17.405 filename=/dev/nvme0n1 00:16:17.405 [job1] 00:16:17.405 filename=/dev/nvme0n2 00:16:17.405 [job2] 00:16:17.405 filename=/dev/nvme0n3 00:16:17.405 [job3] 00:16:17.405 filename=/dev/nvme0n4 00:16:17.405 Could not set queue depth (nvme0n1) 00:16:17.405 Could not set queue depth (nvme0n2) 00:16:17.405 Could not set queue depth (nvme0n3) 00:16:17.405 Could not set queue depth (nvme0n4) 00:16:17.405 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:17.405 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:17.405 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:17.406 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:17.406 fio-3.35 00:16:17.406 Starting 4 threads 00:16:18.782 00:16:18.782 job0: (groupid=0, jobs=1): err= 0: pid=3240528: Mon Jul 15 07:52:03 2024 00:16:18.782 read: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1012msec) 00:16:18.782 slat (nsec): min=1286, max=17511k, 
avg=129223.44, stdev=854824.73 00:16:18.782 clat (usec): min=7552, max=54540, avg=17289.04, stdev=8599.04 00:16:18.782 lat (usec): min=7558, max=54559, avg=17418.26, stdev=8671.72 00:16:18.782 clat percentiles (usec): 00:16:18.782 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10421], 00:16:18.782 | 30.00th=[11469], 40.00th=[12518], 50.00th=[14091], 60.00th=[15926], 00:16:18.782 | 70.00th=[18744], 80.00th=[23725], 90.00th=[30278], 95.00th=[36963], 00:16:18.782 | 99.00th=[44827], 99.50th=[46924], 99.90th=[46924], 99.95th=[53740], 00:16:18.782 | 99.99th=[54789] 00:16:18.782 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:16:18.782 slat (usec): min=2, max=17914, avg=108.82, stdev=806.86 00:16:18.782 clat (usec): min=1125, max=42457, avg=14646.92, stdev=6598.36 00:16:18.782 lat (usec): min=1135, max=42494, avg=14755.74, stdev=6687.60 00:16:18.782 clat percentiles (usec): 00:16:18.782 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:16:18.782 | 30.00th=[10159], 40.00th=[10814], 50.00th=[12125], 60.00th=[12780], 00:16:18.782 | 70.00th=[13698], 80.00th=[20579], 90.00th=[24249], 95.00th=[30540], 00:16:18.782 | 99.00th=[33817], 99.50th=[34341], 99.90th=[38536], 99.95th=[41681], 00:16:18.782 | 99.99th=[42206] 00:16:18.782 bw ( KiB/s): min=12288, max=20480, per=23.41%, avg=16384.00, stdev=5792.62, samples=2 00:16:18.782 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:16:18.782 lat (msec) : 2=0.02%, 10=16.77%, 20=57.07%, 50=26.11%, 100=0.02% 00:16:18.782 cpu : usr=2.97%, sys=4.85%, ctx=295, majf=0, minf=1 00:16:18.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.782 issued rwts: total=3955,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.782 job1: (groupid=0, jobs=1): err= 0: pid=3240529: Mon Jul 15 07:52:03 2024 00:16:18.782 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:16:18.782 slat (nsec): min=1235, max=19545k, avg=85232.38, stdev=723656.31 00:16:18.782 clat (usec): min=2766, max=49328, avg=12070.33, stdev=5719.81 00:16:18.782 lat (usec): min=2770, max=56358, avg=12155.57, stdev=5767.27 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 1.00th=[ 5669], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[ 9110], 00:16:18.783 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[11207], 00:16:18.783 | 70.00th=[11863], 80.00th=[12911], 90.00th=[18482], 95.00th=[21365], 00:16:18.783 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:16:18.783 | 99.99th=[49546] 00:16:18.783 write: IOPS=5244, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1008msec); 0 zone resets 00:16:18.783 slat (usec): min=2, max=11461, avg=85.23, stdev=636.22 00:16:18.783 clat (usec): min=298, max=78719, avg=12521.89, stdev=10480.70 00:16:18.783 lat (usec): min=353, max=78731, avg=12607.11, stdev=10549.78 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 1.00th=[ 2376], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 7570], 00:16:18.783 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10290], 00:16:18.783 | 70.00th=[10814], 80.00th=[14353], 90.00th=[23200], 95.00th=[33424], 00:16:18.783 | 99.00th=[64226], 99.50th=[69731], 99.90th=[79168], 99.95th=[79168], 00:16:18.783 | 99.99th=[79168] 00:16:18.783 bw ( KiB/s): min=18880, max=22392, 
per=29.48%, avg=20636.00, stdev=2483.36, samples=2 00:16:18.783 iops : min= 4720, max= 5598, avg=5159.00, stdev=620.84, samples=2 00:16:18.783 lat (usec) : 500=0.03%, 1000=0.18% 00:16:18.783 lat (msec) : 2=0.07%, 4=1.62%, 10=47.68%, 20=39.87%, 50=9.40% 00:16:18.783 lat (msec) : 100=1.14% 00:16:18.783 cpu : usr=3.67%, sys=5.96%, ctx=425, majf=0, minf=1 00:16:18.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.783 issued rwts: total=5120,5286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.783 job2: (groupid=0, jobs=1): err= 0: pid=3240530: Mon Jul 15 07:52:03 2024 00:16:18.783 read: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1005msec) 00:16:18.783 slat (nsec): min=1107, max=17792k, avg=101674.74, stdev=798415.76 00:16:18.783 clat (usec): min=4717, max=34771, avg=14446.36, stdev=5053.78 00:16:18.783 lat (usec): min=4722, max=35305, avg=14548.04, stdev=5097.23 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 1.00th=[ 5080], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[10552], 00:16:18.783 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13698], 60.00th=[14353], 00:16:18.783 | 70.00th=[15926], 80.00th=[18744], 90.00th=[21365], 95.00th=[24773], 00:16:18.783 | 99.00th=[28181], 99.50th=[28181], 99.90th=[30278], 99.95th=[32900], 00:16:18.783 | 99.99th=[34866] 00:16:18.783 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:16:18.783 slat (nsec): min=1888, max=25912k, avg=126169.71, stdev=977272.92 00:16:18.783 clat (usec): min=358, max=104137, avg=17384.94, stdev=15808.50 00:16:18.783 lat (usec): min=573, max=104169, avg=17511.11, stdev=15916.17 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 1.00th=[ 1844], 5.00th=[ 4883], 10.00th=[ 6652], 20.00th=[ 9634], 00:16:18.783 | 30.00th=[ 10945], 40.00th=[ 11469], 50.00th=[ 12780], 60.00th=[ 13960], 00:16:18.783 | 70.00th=[ 16909], 80.00th=[ 22676], 90.00th=[ 27657], 95.00th=[ 46400], 00:16:18.783 | 99.00th=[ 98042], 99.50th=[100140], 99.90th=[104334], 99.95th=[104334], 00:16:18.783 | 99.99th=[104334] 00:16:18.783 bw ( KiB/s): min=13032, max=19736, per=23.41%, avg=16384.00, stdev=4740.44, samples=2 00:16:18.783 iops : min= 3258, max= 4934, avg=4096.00, stdev=1185.11, samples=2 00:16:18.783 lat (usec) : 500=0.01%, 750=0.11%, 1000=0.26% 00:16:18.783 lat (msec) : 2=0.29%, 4=1.52%, 10=14.30%, 20=60.36%, 50=21.01% 00:16:18.783 lat (msec) : 100=1.89%, 250=0.25% 00:16:18.783 cpu : usr=3.49%, sys=4.38%, ctx=285, majf=0, minf=1 00:16:18.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.783 issued rwts: total=3890,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.783 job3: (groupid=0, jobs=1): err= 0: pid=3240531: Mon Jul 15 07:52:03 2024 00:16:18.783 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:18.783 slat (nsec): min=1168, max=11163k, avg=111609.53, stdev=694403.04 00:16:18.783 clat (usec): min=5131, max=41750, avg=14156.11, stdev=4919.31 00:16:18.783 lat (usec): min=5168, max=43051, avg=14267.72, stdev=4984.87 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 
1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11207], 00:16:18.783 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:16:18.783 | 70.00th=[13698], 80.00th=[16909], 90.00th=[21103], 95.00th=[25035], 00:16:18.783 | 99.00th=[32637], 99.50th=[34866], 99.90th=[41681], 99.95th=[41681], 00:16:18.783 | 99.99th=[41681] 00:16:18.783 write: IOPS=4205, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec); 0 zone resets 00:16:18.783 slat (usec): min=2, max=8577, avg=119.39, stdev=635.19 00:16:18.783 clat (usec): min=598, max=52482, avg=16346.89, stdev=9354.07 00:16:18.783 lat (usec): min=606, max=56726, avg=16466.28, stdev=9423.78 00:16:18.783 clat percentiles (usec): 00:16:18.783 | 1.00th=[ 3949], 5.00th=[ 6980], 10.00th=[ 9634], 20.00th=[10421], 00:16:18.783 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12518], 60.00th=[13698], 00:16:18.783 | 70.00th=[16581], 80.00th=[22414], 90.00th=[32900], 95.00th=[35914], 00:16:18.783 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:16:18.783 | 99.99th=[52691] 00:16:18.783 bw ( KiB/s): min=12168, max=20720, per=23.49%, avg=16444.00, stdev=6047.18, samples=2 00:16:18.783 iops : min= 3042, max= 5180, avg=4111.00, stdev=1511.79, samples=2 00:16:18.783 lat (usec) : 750=0.04% 00:16:18.783 lat (msec) : 4=0.62%, 10=11.49%, 20=69.69%, 50=17.75%, 100=0.41% 00:16:18.783 cpu : usr=4.18%, sys=4.08%, ctx=377, majf=0, minf=1 00:16:18.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.783 issued rwts: total=4096,4231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.783 00:16:18.783 Run status group 0 (all jobs): 00:16:18.783 READ: bw=65.9MiB/s (69.1MB/s), 15.1MiB/s-19.8MiB/s (15.9MB/s-20.8MB/s), io=66.6MiB (69.9MB), run=1005-1012msec 00:16:18.783 WRITE: bw=68.4MiB/s (71.7MB/s), 15.8MiB/s-20.5MiB/s (16.6MB/s-21.5MB/s), io=69.2MiB (72.5MB), run=1005-1012msec 00:16:18.783 00:16:18.783 Disk stats (read/write): 00:16:18.783 nvme0n1: ios=3493/3584, merge=0/0, ticks=24868/24565, in_queue=49433, util=87.17% 00:16:18.783 nvme0n2: ios=4524/4608, merge=0/0, ticks=48600/41797, in_queue=90397, util=90.87% 00:16:18.783 nvme0n3: ios=3092/3371, merge=0/0, ticks=37231/49220, in_queue=86451, util=97.61% 00:16:18.783 nvme0n4: ios=3629/3855, merge=0/0, ticks=30922/33886, in_queue=64808, util=97.91% 00:16:18.783 07:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:18.783 [global] 00:16:18.783 thread=1 00:16:18.783 invalidate=1 00:16:18.783 rw=randwrite 00:16:18.783 time_based=1 00:16:18.783 runtime=1 00:16:18.783 ioengine=libaio 00:16:18.783 direct=1 00:16:18.783 bs=4096 00:16:18.783 iodepth=128 00:16:18.783 norandommap=0 00:16:18.783 numjobs=1 00:16:18.783 00:16:18.783 verify_dump=1 00:16:18.783 verify_backlog=512 00:16:18.783 verify_state_save=0 00:16:18.783 do_verify=1 00:16:18.783 verify=crc32c-intel 00:16:18.783 [job0] 00:16:18.783 filename=/dev/nvme0n1 00:16:18.783 [job1] 00:16:18.783 filename=/dev/nvme0n2 00:16:18.783 [job2] 00:16:18.783 filename=/dev/nvme0n3 00:16:18.783 [job3] 00:16:18.783 filename=/dev/nvme0n4 00:16:18.783 Could not set queue depth (nvme0n1) 00:16:18.783 Could not set queue depth (nvme0n2) 00:16:18.783 Could not set queue depth 
(nvme0n3) 00:16:18.783 Could not set queue depth (nvme0n4) 00:16:19.041 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.041 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.041 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.041 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.041 fio-3.35 00:16:19.041 Starting 4 threads 00:16:20.424 00:16:20.424 job0: (groupid=0, jobs=1): err= 0: pid=3240908: Mon Jul 15 07:52:04 2024 00:16:20.424 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:16:20.424 slat (nsec): min=1470, max=15056k, avg=117265.32, stdev=826741.86 00:16:20.424 clat (usec): min=3198, max=51335, avg=13655.84, stdev=5864.97 00:16:20.424 lat (usec): min=3204, max=51345, avg=13773.11, stdev=5941.00 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 9110], 00:16:20.424 | 30.00th=[10159], 40.00th=[10814], 50.00th=[12256], 60.00th=[14746], 00:16:20.424 | 70.00th=[15533], 80.00th=[16909], 90.00th=[20317], 95.00th=[23725], 00:16:20.424 | 99.00th=[35914], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:16:20.424 | 99.99th=[51119] 00:16:20.424 write: IOPS=4465, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1010msec); 0 zone resets 00:16:20.424 slat (usec): min=2, max=13074, avg=109.42, stdev=600.34 00:16:20.424 clat (msec): min=2, max=101, avg=15.98, stdev=14.41 00:16:20.424 lat (msec): min=2, max=101, avg=16.09, stdev=14.50 00:16:20.424 clat percentiles (msec): 00:16:20.424 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:16:20.424 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:16:20.424 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 37], 00:16:20.424 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 102], 00:16:20.424 | 99.99th=[ 102] 00:16:20.424 bw ( KiB/s): min=16384, max=18680, per=26.33%, avg=17532.00, stdev=1623.52, samples=2 00:16:20.424 iops : min= 4096, max= 4670, avg=4383.00, stdev=405.88, samples=2 00:16:20.424 lat (msec) : 4=1.17%, 10=30.32%, 20=53.25%, 50=13.33%, 100=1.77% 00:16:20.424 lat (msec) : 250=0.16% 00:16:20.424 cpu : usr=3.27%, sys=4.66%, ctx=582, majf=0, minf=1 00:16:20.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:20.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.424 issued rwts: total=4096,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.424 job1: (groupid=0, jobs=1): err= 0: pid=3240909: Mon Jul 15 07:52:04 2024 00:16:20.424 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:16:20.424 slat (nsec): min=1092, max=26597k, avg=139092.15, stdev=1241729.18 00:16:20.424 clat (usec): min=1717, max=52185, avg=17590.77, stdev=8737.26 00:16:20.424 lat (usec): min=1726, max=52207, avg=17729.87, stdev=8842.31 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 2999], 5.00th=[ 6783], 10.00th=[ 9110], 20.00th=[11338], 00:16:20.424 | 30.00th=[13042], 40.00th=[14353], 50.00th=[15401], 60.00th=[16909], 00:16:20.424 | 70.00th=[19792], 80.00th=[22152], 90.00th=[27657], 95.00th=[34866], 00:16:20.424 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 
99.95th=[50070], 00:16:20.424 | 99.99th=[52167] 00:16:20.424 write: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1011msec); 0 zone resets 00:16:20.424 slat (nsec): min=1953, max=19612k, avg=107699.43, stdev=778976.86 00:16:20.424 clat (usec): min=441, max=57400, avg=16493.44, stdev=10278.03 00:16:20.424 lat (usec): min=565, max=57412, avg=16601.14, stdev=10353.99 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 7439], 20.00th=[ 8455], 00:16:20.424 | 30.00th=[10028], 40.00th=[11994], 50.00th=[14091], 60.00th=[15401], 00:16:20.424 | 70.00th=[16581], 80.00th=[23200], 90.00th=[27395], 95.00th=[39060], 00:16:20.424 | 99.00th=[52691], 99.50th=[54264], 99.90th=[57410], 99.95th=[57410], 00:16:20.424 | 99.99th=[57410] 00:16:20.424 bw ( KiB/s): min=14264, max=16384, per=23.01%, avg=15324.00, stdev=1499.07, samples=2 00:16:20.424 iops : min= 3566, max= 4096, avg=3831.00, stdev=374.77, samples=2 00:16:20.424 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.03% 00:16:20.424 lat (msec) : 2=0.30%, 4=0.65%, 10=22.09%, 20=49.76%, 50=25.03% 00:16:20.424 lat (msec) : 100=2.08% 00:16:20.424 cpu : usr=2.57%, sys=4.26%, ctx=365, majf=0, minf=1 00:16:20.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:20.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.424 issued rwts: total=3584,3958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.424 job2: (groupid=0, jobs=1): err= 0: pid=3240910: Mon Jul 15 07:52:04 2024 00:16:20.424 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:16:20.424 slat (nsec): min=1328, max=21154k, avg=109095.49, stdev=788878.74 00:16:20.424 clat (usec): min=4025, max=46240, avg=14129.59, stdev=4971.15 00:16:20.424 lat (usec): min=4038, max=61616, avg=14238.68, stdev=5050.16 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:16:20.424 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:16:20.424 | 70.00th=[14222], 80.00th=[17171], 90.00th=[20317], 95.00th=[23462], 00:16:20.424 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:16:20.424 | 99.99th=[46400] 00:16:20.424 write: IOPS=4750, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1006msec); 0 zone resets 00:16:20.424 slat (usec): min=2, max=36354, avg=98.60, stdev=791.60 00:16:20.424 clat (usec): min=575, max=48156, avg=13050.60, stdev=6758.81 00:16:20.424 lat (usec): min=2489, max=48167, avg=13149.20, stdev=6801.59 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 3687], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[ 9372], 00:16:20.424 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:16:20.424 | 70.00th=[12780], 80.00th=[15533], 90.00th=[18482], 95.00th=[23987], 00:16:20.424 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:16:20.424 | 99.99th=[47973] 00:16:20.424 bw ( KiB/s): min=16744, max=20464, per=27.94%, avg=18604.00, stdev=2630.44, samples=2 00:16:20.424 iops : min= 4186, max= 5116, avg=4651.00, stdev=657.61, samples=2 00:16:20.424 lat (usec) : 750=0.01% 00:16:20.424 lat (msec) : 4=0.70%, 10=17.64%, 20=71.51%, 50=10.13% 00:16:20.424 cpu : usr=3.68%, sys=5.47%, ctx=449, majf=0, minf=1 00:16:20.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:20.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.424 issued rwts: total=4608,4779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.424 job3: (groupid=0, jobs=1): err= 0: pid=3240911: Mon Jul 15 07:52:04 2024 00:16:20.424 read: IOPS=3493, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1006msec) 00:16:20.424 slat (usec): min=2, max=16732, avg=140.89, stdev=1080.79 00:16:20.424 clat (usec): min=1767, max=55939, avg=18972.03, stdev=9319.27 00:16:20.424 lat (usec): min=3565, max=55962, avg=19112.92, stdev=9428.68 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 6325], 5.00th=[ 7046], 10.00th=[ 8586], 20.00th=[11207], 00:16:20.424 | 30.00th=[11731], 40.00th=[13960], 50.00th=[16319], 60.00th=[21103], 00:16:20.424 | 70.00th=[23462], 80.00th=[25560], 90.00th=[34866], 95.00th=[38011], 00:16:20.424 | 99.00th=[39584], 99.50th=[43254], 99.90th=[47973], 99.95th=[50070], 00:16:20.424 | 99.99th=[55837] 00:16:20.424 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:16:20.424 slat (usec): min=2, max=23032, avg=120.93, stdev=988.48 00:16:20.424 clat (usec): min=431, max=81417, avg=16904.00, stdev=13273.99 00:16:20.424 lat (usec): min=542, max=81424, avg=17024.94, stdev=13338.96 00:16:20.424 clat percentiles (usec): 00:16:20.424 | 1.00th=[ 1942], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 8979], 00:16:20.424 | 30.00th=[10683], 40.00th=[11994], 50.00th=[12780], 60.00th=[14091], 00:16:20.424 | 70.00th=[16712], 80.00th=[26084], 90.00th=[28705], 95.00th=[36439], 00:16:20.424 | 99.00th=[73925], 99.50th=[77071], 99.90th=[81265], 99.95th=[81265], 00:16:20.424 | 99.99th=[81265] 00:16:20.424 bw ( KiB/s): min=14208, max=14464, per=21.53%, avg=14336.00, stdev=181.02, samples=2 00:16:20.424 iops : min= 3552, max= 3616, avg=3584.00, stdev=45.25, samples=2 00:16:20.424 lat (usec) : 500=0.03%, 750=0.13%, 1000=0.06% 00:16:20.424 lat (msec) : 2=0.38%, 4=1.37%, 10=14.99%, 20=48.39%, 50=32.74% 00:16:20.424 lat (msec) : 100=1.92% 00:16:20.424 cpu : usr=2.39%, sys=4.78%, ctx=193, majf=0, minf=1 00:16:20.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:20.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.424 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.424 00:16:20.424 Run status group 0 (all jobs): 00:16:20.424 READ: bw=61.1MiB/s (64.0MB/s), 13.6MiB/s-17.9MiB/s (14.3MB/s-18.8MB/s), io=61.7MiB (64.7MB), run=1006-1011msec 00:16:20.424 WRITE: bw=65.0MiB/s (68.2MB/s), 13.9MiB/s-18.6MiB/s (14.6MB/s-19.5MB/s), io=65.7MiB (68.9MB), run=1006-1011msec 00:16:20.424 00:16:20.424 Disk stats (read/write): 00:16:20.424 nvme0n1: ios=3122/3584, merge=0/0, ticks=42915/61584, in_queue=104499, util=90.18% 00:16:20.424 nvme0n2: ios=3095/3584, merge=0/0, ticks=50752/45073, in_queue=95825, util=93.30% 00:16:20.424 nvme0n3: ios=3673/4096, merge=0/0, ticks=45229/39703, in_queue=84932, util=97.82% 00:16:20.424 nvme0n4: ios=2979/3072, merge=0/0, ticks=35874/36471, in_queue=72345, util=96.97% 00:16:20.425 07:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:20.425 07:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3241140 00:16:20.425 07:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:20.425 07:52:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:20.425 [global] 00:16:20.425 thread=1 00:16:20.425 invalidate=1 00:16:20.425 rw=read 00:16:20.425 time_based=1 00:16:20.425 runtime=10 00:16:20.425 ioengine=libaio 00:16:20.425 direct=1 00:16:20.425 bs=4096 00:16:20.425 iodepth=1 00:16:20.425 norandommap=1 00:16:20.425 numjobs=1 00:16:20.425 00:16:20.425 [job0] 00:16:20.425 filename=/dev/nvme0n1 00:16:20.425 [job1] 00:16:20.425 filename=/dev/nvme0n2 00:16:20.425 [job2] 00:16:20.425 filename=/dev/nvme0n3 00:16:20.425 [job3] 00:16:20.425 filename=/dev/nvme0n4 00:16:20.425 Could not set queue depth (nvme0n1) 00:16:20.425 Could not set queue depth (nvme0n2) 00:16:20.425 Could not set queue depth (nvme0n3) 00:16:20.425 Could not set queue depth (nvme0n4) 00:16:20.680 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.680 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.680 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.680 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.680 fio-3.35 00:16:20.680 Starting 4 threads 00:16:23.196 07:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:23.451 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42606592, buflen=4096 00:16:23.451 fio: pid=3241280, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:23.451 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:23.707 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.707 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:23.707 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44843008, buflen=4096 00:16:23.707 fio: pid=3241279, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:23.707 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=331776, buflen=4096 00:16:23.707 fio: pid=3241277, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:23.707 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.707 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:23.963 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.963 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:23.963 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=335872, buflen=4096 00:16:23.963 fio: pid=3241278, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:23.963 00:16:23.963 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=3241277: Mon Jul 15 07:52:08 2024 00:16:23.963 read: IOPS=26, BW=105KiB/s (108kB/s)(324KiB/3081msec) 00:16:23.963 slat (usec): min=9, max=10872, avg=199.02, stdev=1262.31 00:16:23.963 clat (usec): min=291, max=47599, avg=37567.13, stdev=11503.93 00:16:23.963 lat (usec): min=312, max=51772, avg=37768.30, stdev=11627.20 00:16:23.964 clat percentiles (usec): 00:16:23.964 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[40633], 20.00th=[40633], 00:16:23.964 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:23.964 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:23.964 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:16:23.964 | 99.99th=[47449] 00:16:23.964 bw ( KiB/s): min= 104, max= 112, per=0.41%, avg=108.80, stdev= 4.38, samples=5 00:16:23.964 iops : min= 26, max= 28, avg=27.20, stdev= 1.10, samples=5 00:16:23.964 lat (usec) : 500=7.32% 00:16:23.964 lat (msec) : 2=1.22%, 50=90.24% 00:16:23.964 cpu : usr=0.00%, sys=0.13%, ctx=84, majf=0, minf=1 00:16:23.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.964 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3241278: Mon Jul 15 07:52:08 2024 00:16:23.964 read: IOPS=25, BW=99.5KiB/s (102kB/s)(328KiB/3298msec) 00:16:23.964 slat (usec): min=9, max=22258, avg=385.33, stdev=2578.52 00:16:23.964 clat (usec): min=493, max=47530, avg=39690.70, stdev=7717.95 00:16:23.964 lat (usec): min=516, max=63551, avg=40080.38, stdev=8224.84 00:16:23.964 clat percentiles (usec): 00:16:23.964 | 1.00th=[ 494], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:23.964 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:23.964 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:23.964 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:16:23.964 | 99.99th=[47449] 00:16:23.964 bw ( KiB/s): min= 92, max= 112, per=0.38%, avg=99.33, stdev= 7.34, samples=6 00:16:23.964 iops : min= 23, max= 28, avg=24.83, stdev= 1.83, samples=6 00:16:23.964 lat (usec) : 500=2.41%, 750=1.20% 00:16:23.964 lat (msec) : 50=95.18% 00:16:23.964 cpu : usr=0.00%, sys=0.12%, ctx=86, majf=0, minf=1 00:16:23.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.964 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3241279: Mon Jul 15 07:52:08 2024 00:16:23.964 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(42.8MiB/2907msec) 00:16:23.964 slat (nsec): min=6159, max=78110, avg=7058.45, stdev=1105.06 00:16:23.964 clat (usec): min=186, max=41930, avg=255.43, stdev=879.60 00:16:23.964 lat (usec): min=193, max=42008, avg=262.48, stdev=880.11 00:16:23.964 clat percentiles (usec): 00:16:23.964 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 
00:16:23.964 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:16:23.964 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:16:23.964 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 3589], 00:16:23.964 | 99.99th=[41681] 00:16:23.964 bw ( KiB/s): min=15592, max=17168, per=62.99%, avg=16435.20, stdev=614.46, samples=5 00:16:23.964 iops : min= 3898, max= 4292, avg=4108.80, stdev=153.61, samples=5 00:16:23.964 lat (usec) : 250=81.43%, 500=18.50% 00:16:23.964 lat (msec) : 4=0.01%, 50=0.05% 00:16:23.964 cpu : usr=1.17%, sys=3.13%, ctx=10951, majf=0, minf=1 00:16:23.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 issued rwts: total=10949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.964 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3241280: Mon Jul 15 07:52:08 2024 00:16:23.964 read: IOPS=3855, BW=15.1MiB/s (15.8MB/s)(40.6MiB/2698msec) 00:16:23.964 slat (nsec): min=6245, max=29967, avg=7173.06, stdev=933.71 00:16:23.964 clat (usec): min=213, max=461, avg=248.58, stdev=11.41 00:16:23.964 lat (usec): min=220, max=491, avg=255.75, stdev=11.53 00:16:23.964 clat percentiles (usec): 00:16:23.964 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:16:23.964 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:16:23.964 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:16:23.964 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 326], 00:16:23.964 | 99.99th=[ 441] 00:16:23.964 bw ( KiB/s): min=15536, max=15648, per=59.76%, avg=15592.00, stdev=47.33, samples=5 00:16:23.964 iops : min= 3884, max= 3912, avg=3898.00, stdev=11.83, samples=5 00:16:23.964 lat (usec) : 250=56.64%, 500=43.35% 00:16:23.964 cpu : usr=0.93%, sys=3.49%, ctx=10403, majf=0, minf=2 00:16:23.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.964 issued rwts: total=10403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.964 00:16:23.964 Run status group 0 (all jobs): 00:16:23.964 READ: bw=25.5MiB/s (26.7MB/s), 99.5KiB/s-15.1MiB/s (102kB/s-15.8MB/s), io=84.0MiB (88.1MB), run=2698-3298msec 00:16:23.964 00:16:23.964 Disk stats (read/write): 00:16:23.964 nvme0n1: ios=76/0, merge=0/0, ticks=2831/0, in_queue=2831, util=95.29% 00:16:23.964 nvme0n2: ios=77/0, merge=0/0, ticks=3051/0, in_queue=3051, util=95.24% 00:16:23.964 nvme0n3: ios=10947/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.52% 00:16:23.964 nvme0n4: ios=10149/0, merge=0/0, ticks=2476/0, in_queue=2476, util=96.45% 00:16:24.220 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.221 07:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:24.477 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.477 
07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:24.477 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.477 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:24.733 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.733 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3241140 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:24.990 nvmf hotplug test: fio failed as expected 00:16:24.990 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.247 rmmod nvme_tcp 00:16:25.247 rmmod nvme_fabrics 00:16:25.247 rmmod nvme_keyring 
00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3238333 ']' 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3238333 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3238333 ']' 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3238333 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3238333 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3238333' 00:16:25.247 killing process with pid 3238333 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3238333 00:16:25.247 07:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3238333 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.507 07:52:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.041 07:52:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.041 00:16:28.041 real 0m26.693s 00:16:28.041 user 1m46.508s 00:16:28.041 sys 0m8.062s 00:16:28.041 07:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.041 07:52:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.041 ************************************ 00:16:28.041 END TEST nvmf_fio_target 00:16:28.041 ************************************ 00:16:28.041 07:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:28.041 07:52:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:28.041 07:52:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.041 07:52:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.041 07:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.041 ************************************ 00:16:28.041 START TEST nvmf_bdevio 00:16:28.041 ************************************ 00:16:28.041 07:52:12 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:28.041 * Looking for test storage... 00:16:28.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.041 07:52:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.311 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:33.312 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:33.312 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:33.312 Found net devices under 0000:86:00.0: cvl_0_0 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:33.312 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.312 07:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.312 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:33.571 00:16:33.571 --- 10.0.0.2 ping statistics --- 00:16:33.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.571 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:16:33.571 00:16:33.571 --- 10.0.0.1 ping statistics --- 00:16:33.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.571 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3245588 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3245588 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3245588 ']' 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.571 07:52:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.571 [2024-07-15 07:52:18.279799] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:16:33.572 [2024-07-15 07:52:18.279844] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.572 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.830 [2024-07-15 07:52:18.347579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.830 [2024-07-15 07:52:18.420928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.830 [2024-07-15 07:52:18.420970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:33.830 [2024-07-15 07:52:18.420976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.830 [2024-07-15 07:52:18.420982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.830 [2024-07-15 07:52:18.420987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.830 [2024-07-15 07:52:18.421100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:33.830 [2024-07-15 07:52:18.421204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:33.830 [2024-07-15 07:52:18.421314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.830 [2024-07-15 07:52:18.421314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.399 [2024-07-15 07:52:19.129076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.399 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.658 Malloc0 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
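Stripped of the xtrace noise, the rpc_cmd calls in this stretch assemble the whole target in five RPCs: create the TCP transport, back it with a RAM bdev, create a subsystem, attach the bdev as a namespace, and open a listener (whose success notice follows just below). Condensed, with rpc.py standing in for the full scripts/rpc.py path and all options copied verbatim from the trace:

# Target assembly behind the rpc_cmd trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 64 MiB / 512 B geometry is exactly what bdevio later reports for Nvme1n1 (131072 blocks of 512 bytes), confirming that the malloc bdev is what the initiator sees over the fabric.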
00:16:34.658 [2024-07-15 07:52:19.180552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:34.658 { 00:16:34.658 "params": { 00:16:34.658 "name": "Nvme$subsystem", 00:16:34.658 "trtype": "$TEST_TRANSPORT", 00:16:34.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.658 "adrfam": "ipv4", 00:16:34.658 "trsvcid": "$NVMF_PORT", 00:16:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.658 "hdgst": ${hdgst:-false}, 00:16:34.658 "ddgst": ${ddgst:-false} 00:16:34.658 }, 00:16:34.658 "method": "bdev_nvme_attach_controller" 00:16:34.658 } 00:16:34.658 EOF 00:16:34.658 )") 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:34.658 07:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.658 "params": { 00:16:34.658 "name": "Nvme1", 00:16:34.658 "trtype": "tcp", 00:16:34.658 "traddr": "10.0.0.2", 00:16:34.658 "adrfam": "ipv4", 00:16:34.658 "trsvcid": "4420", 00:16:34.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.658 "hdgst": false, 00:16:34.658 "ddgst": false 00:16:34.658 }, 00:16:34.658 "method": "bdev_nvme_attach_controller" 00:16:34.658 }' 00:16:34.658 [2024-07-15 07:52:19.229177] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:16:34.658 [2024-07-15 07:52:19.229221] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245769 ] 00:16:34.658 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.658 [2024-07-15 07:52:19.295150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.658 [2024-07-15 07:52:19.373166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.658 [2024-07-15 07:52:19.373273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.658 [2024-07-15 07:52:19.373273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.225 I/O targets: 00:16:35.225 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.225 00:16:35.225 00:16:35.225 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.225 http://cunit.sourceforge.net/ 00:16:35.225 00:16:35.225 00:16:35.225 Suite: bdevio tests on: Nvme1n1 00:16:35.225 Test: blockdev write read block ...passed 00:16:35.225 Test: blockdev write zeroes read block ...passed 00:16:35.225 Test: blockdev write zeroes read no split ...passed 00:16:35.225 Test: blockdev write zeroes read split ...passed 00:16:35.225 Test: blockdev write zeroes read split partial ...passed 00:16:35.225 Test: blockdev reset ...[2024-07-15 07:52:19.888735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.225 [2024-07-15 07:52:19.888800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbae6d0 (9): Bad file descriptor 00:16:35.225 [2024-07-15 07:52:19.908729] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:35.225 passed 00:16:35.225 Test: blockdev write read 8 blocks ...passed 00:16:35.225 Test: blockdev write read size > 128k ...passed 00:16:35.225 Test: blockdev write read invalid size ...passed 00:16:35.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:35.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:35.225 Test: blockdev write read max offset ...passed 00:16:35.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:35.483 Test: blockdev writev readv 8 blocks ...passed 00:16:35.483 Test: blockdev writev readv 30 x 1block ...passed 00:16:35.483 Test: blockdev writev readv block ...passed 00:16:35.483 Test: blockdev writev readv size > 128k ...passed 00:16:35.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:35.483 Test: blockdev comparev and writev ...[2024-07-15 07:52:20.121187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.121232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.121506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.121762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.121784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.121792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.122037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.122048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.122059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.483 [2024-07-15 07:52:20.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:35.483 passed 00:16:35.483 Test: blockdev nvme passthru rw ...passed 00:16:35.483 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:52:20.203669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.483 [2024-07-15 07:52:20.203691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.203805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.483 [2024-07-15 07:52:20.203815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.203925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.483 [2024-07-15 07:52:20.203934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:35.483 [2024-07-15 07:52:20.204042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.483 [2024-07-15 07:52:20.204052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:35.483 passed 00:16:35.483 Test: blockdev nvme admin passthru ...passed 00:16:35.742 Test: blockdev copy ...passed 00:16:35.742 00:16:35.742 Run Summary: Type Total Ran Passed Failed Inactive 00:16:35.742 suites 1 1 n/a 0 0 00:16:35.742 tests 23 23 23 0 0 00:16:35.742 asserts 152 152 152 0 n/a 00:16:35.742 00:16:35.742 Elapsed time = 1.147 seconds 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.742 rmmod nvme_tcp 00:16:35.742 rmmod nvme_fabrics 00:16:35.742 rmmod nvme_keyring 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3245588 ']' 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3245588 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3245588 ']' 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3245588 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.742 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3245588 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3245588' 00:16:36.001 killing process with pid 3245588 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3245588 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3245588 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.001 07:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.658 07:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.658 00:16:38.658 real 0m10.478s 00:16:38.658 user 0m13.245s 00:16:38.658 sys 0m4.947s 00:16:38.658 07:52:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.658 07:52:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:38.658 ************************************ 00:16:38.658 END TEST nvmf_bdevio 00:16:38.658 ************************************ 00:16:38.658 07:52:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:38.658 07:52:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:38.658 07:52:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:38.658 07:52:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.658 07:52:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:38.658 ************************************ 00:16:38.658 START TEST nvmf_auth_target 00:16:38.658 ************************************ 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:38.658 * Looking for test storage... 
00:16:38.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.658 07:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.659 07:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.659 07:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.659 07:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.659 07:52:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.659 07:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:43.934 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.935 07:52:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:43.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:43.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:43.935 Found net devices under 0000:86:00.0: cvl_0_0 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:43.935 Found net devices under 0000:86:00.1: cvl_0_1 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.935 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:16:44.194 00:16:44.194 --- 10.0.0.2 ping statistics --- 00:16:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.194 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:16:44.194 00:16:44.194 --- 10.0.0.1 ping statistics --- 00:16:44.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.194 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3249510 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3249510 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3249510 ']' 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
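This is the same nvmf_tcp_init sequence the bdevio run went through: move the target port into its own namespace, address both ends of the link, punch a firewall hole for TCP/4420, and verify reachability with one ping in each direction. Condensed from the trace (the cvl_0_0/cvl_0_1 interface names are the ones probed for this host above):

# nvmf_tcp_init, condensed.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP in
ping -c 1 10.0.0.2                                           # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # and back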
00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.194 07:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.128 07:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3249639 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f5d54d567ceefc6688a2a73d9295054766800584926cc187 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RrX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f5d54d567ceefc6688a2a73d9295054766800584926cc187 0 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f5d54d567ceefc6688a2a73d9295054766800584926cc187 0 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f5d54d567ceefc6688a2a73d9295054766800584926cc187 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RrX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RrX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.RrX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f3f73c1c0b43349ecfd22806e8fb4b6c7fc968d4df04989ca632e6ac0e961b27 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IpK 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f3f73c1c0b43349ecfd22806e8fb4b6c7fc968d4df04989ca632e6ac0e961b27 3 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f3f73c1c0b43349ecfd22806e8fb4b6c7fc968d4df04989ca632e6ac0e961b27 3 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f3f73c1c0b43349ecfd22806e8fb4b6c7fc968d4df04989ca632e6ac0e961b27 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IpK 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IpK 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.IpK 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=832ca5b47993a976ef2df502bd49f516 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qhV 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 832ca5b47993a976ef2df502bd49f516 1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 832ca5b47993a976ef2df502bd49f516 1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=832ca5b47993a976ef2df502bd49f516 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qhV 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qhV 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.qhV 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=91332bedcf91be99418413870f2bb61480c1c3614635df00 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EEU 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 91332bedcf91be99418413870f2bb61480c1c3614635df00 2 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 91332bedcf91be99418413870f2bb61480c1c3614635df00 2 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=91332bedcf91be99418413870f2bb61480c1c3614635df00 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.129 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EEU 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EEU 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.EEU 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c0bda1f294c519a4b64655bb8cb828b0e697ce378203a924 00:16:45.387 
07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yku 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c0bda1f294c519a4b64655bb8cb828b0e697ce378203a924 2 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c0bda1f294c519a4b64655bb8cb828b0e697ce378203a924 2 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c0bda1f294c519a4b64655bb8cb828b0e697ce378203a924 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yku 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yku 00:16:45.387 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.yku 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7399406709f3e41056e85a3cc57539ad 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gMg 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7399406709f3e41056e85a3cc57539ad 1 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7399406709f3e41056e85a3cc57539ad 1 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7399406709f3e41056e85a3cc57539ad 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.388 07:52:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gMg 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gMg 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.gMg 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75dbebdc6cbcd8b13e9608f4863325f88966c593a088f6d8c9939648b2ceabbe 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2o6 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75dbebdc6cbcd8b13e9608f4863325f88966c593a088f6d8c9939648b2ceabbe 3 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75dbebdc6cbcd8b13e9608f4863325f88966c593a088f6d8c9939648b2ceabbe 3 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75dbebdc6cbcd8b13e9608f4863325f88966c593a088f6d8c9939648b2ceabbe 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2o6 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2o6 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.2o6 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3249510 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3249510 ']' 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
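Each gen_dhchap_key call in this stretch draws len/2 random bytes with xxd and wraps them in the DHHC-1 secret representation used for NVMe DH-HMAC-CHAP, "DHHC-1:<hash>:<base64 payload>:", where the hash field 0/1/2/3 maps to unhashed/SHA-256/SHA-384/SHA-512. A sketch of one iteration follows; the python body is an assumption about what the traced "python -" step does, following nvme-cli's convention of appending a little-endian CRC-32 of the secret before base64-encoding:

# gen_dhchap_key null 48, sketched: 24 random bytes -> DHHC-1 secret file.
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC-32 trailer
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
chmod 0600 "$file"   # matches the chmod 0600 in the trace

The resulting key files are then registered on both sides, on the target via rpc_cmd keyring_file_add_key and on the host via hostrpc against /var/tmp/host.sock, as the trace below shows.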
00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.388 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3249639 /var/tmp/host.sock 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3249639 ']' 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.646 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RrX 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RrX 00:16:45.904 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RrX 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.IpK ]] 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IpK 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IpK 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IpK 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qhV 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qhV 00:16:46.162 07:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qhV 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.EEU ]] 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EEU 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EEU 00:16:46.420 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EEU 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yku 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yku 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yku 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.gMg ]] 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gMg 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gMg 00:16:46.679 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.gMg 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2o6 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2o6 00:16:46.938 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2o6 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.196 07:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.455 00:16:47.455 07:52:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.455 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.455 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.713 { 00:16:47.713 "cntlid": 1, 00:16:47.713 "qid": 0, 00:16:47.713 "state": "enabled", 00:16:47.713 "thread": "nvmf_tgt_poll_group_000", 00:16:47.713 "listen_address": { 00:16:47.713 "trtype": "TCP", 00:16:47.713 "adrfam": "IPv4", 00:16:47.713 "traddr": "10.0.0.2", 00:16:47.713 "trsvcid": "4420" 00:16:47.713 }, 00:16:47.713 "peer_address": { 00:16:47.713 "trtype": "TCP", 00:16:47.713 "adrfam": "IPv4", 00:16:47.713 "traddr": "10.0.0.1", 00:16:47.713 "trsvcid": "34680" 00:16:47.713 }, 00:16:47.713 "auth": { 00:16:47.713 "state": "completed", 00:16:47.713 "digest": "sha256", 00:16:47.713 "dhgroup": "null" 00:16:47.713 } 00:16:47.713 } 00:16:47.713 ]' 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.713 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.972 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.972 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.972 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.972 07:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.540 07:52:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.540 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.798 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.056 00:16:49.056 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.056 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.056 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.315 { 00:16:49.315 "cntlid": 3, 00:16:49.315 "qid": 0, 00:16:49.315 
"state": "enabled", 00:16:49.315 "thread": "nvmf_tgt_poll_group_000", 00:16:49.315 "listen_address": { 00:16:49.315 "trtype": "TCP", 00:16:49.315 "adrfam": "IPv4", 00:16:49.315 "traddr": "10.0.0.2", 00:16:49.315 "trsvcid": "4420" 00:16:49.315 }, 00:16:49.315 "peer_address": { 00:16:49.315 "trtype": "TCP", 00:16:49.315 "adrfam": "IPv4", 00:16:49.315 "traddr": "10.0.0.1", 00:16:49.315 "trsvcid": "34704" 00:16:49.315 }, 00:16:49.315 "auth": { 00:16:49.315 "state": "completed", 00:16:49.315 "digest": "sha256", 00:16:49.315 "dhgroup": "null" 00:16:49.315 } 00:16:49.315 } 00:16:49.315 ]' 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.315 07:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.315 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.315 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.315 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.574 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.140 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:50.398 07:52:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.398 07:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.657 00:16:50.657 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.657 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.657 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.657 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.916 { 00:16:50.916 "cntlid": 5, 00:16:50.916 "qid": 0, 00:16:50.916 "state": "enabled", 00:16:50.916 "thread": "nvmf_tgt_poll_group_000", 00:16:50.916 "listen_address": { 00:16:50.916 "trtype": "TCP", 00:16:50.916 "adrfam": "IPv4", 00:16:50.916 "traddr": "10.0.0.2", 00:16:50.916 "trsvcid": "4420" 00:16:50.916 }, 00:16:50.916 "peer_address": { 00:16:50.916 "trtype": "TCP", 00:16:50.916 "adrfam": "IPv4", 00:16:50.916 "traddr": "10.0.0.1", 00:16:50.916 "trsvcid": "34728" 00:16:50.916 }, 00:16:50.916 "auth": { 00:16:50.916 "state": "completed", 00:16:50.916 "digest": "sha256", 00:16:50.916 "dhgroup": "null" 00:16:50.916 } 00:16:50.916 } 00:16:50.916 ]' 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.916 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.175 07:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.738 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.996 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.996 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.254 { 00:16:52.254 "cntlid": 7, 00:16:52.254 "qid": 0, 00:16:52.254 "state": "enabled", 00:16:52.254 "thread": "nvmf_tgt_poll_group_000", 00:16:52.254 "listen_address": { 00:16:52.254 "trtype": "TCP", 00:16:52.254 "adrfam": "IPv4", 00:16:52.254 "traddr": "10.0.0.2", 00:16:52.254 "trsvcid": "4420" 00:16:52.254 }, 00:16:52.254 "peer_address": { 00:16:52.254 "trtype": "TCP", 00:16:52.254 "adrfam": "IPv4", 00:16:52.254 "traddr": "10.0.0.1", 00:16:52.254 "trsvcid": "34746" 00:16:52.254 }, 00:16:52.254 "auth": { 00:16:52.254 "state": "completed", 00:16:52.254 "digest": "sha256", 00:16:52.254 "dhgroup": "null" 00:16:52.254 } 00:16:52.254 } 00:16:52.254 ]' 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.254 07:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.512 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:52.512 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.512 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.512 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.512 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.770 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.337 07:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.337 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.596 00:16:53.596 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.596 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.596 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.855 { 00:16:53.855 "cntlid": 9, 00:16:53.855 "qid": 0, 00:16:53.855 "state": "enabled", 00:16:53.855 "thread": "nvmf_tgt_poll_group_000", 00:16:53.855 "listen_address": { 00:16:53.855 "trtype": "TCP", 00:16:53.855 "adrfam": "IPv4", 00:16:53.855 "traddr": "10.0.0.2", 00:16:53.855 "trsvcid": "4420" 00:16:53.855 }, 00:16:53.855 "peer_address": { 00:16:53.855 "trtype": "TCP", 00:16:53.855 "adrfam": "IPv4", 00:16:53.855 "traddr": "10.0.0.1", 00:16:53.855 "trsvcid": "46772" 00:16:53.855 }, 00:16:53.855 "auth": { 00:16:53.855 "state": "completed", 00:16:53.855 "digest": "sha256", 00:16:53.855 "dhgroup": "ffdhe2048" 00:16:53.855 } 00:16:53.855 } 00:16:53.855 ]' 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.855 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.114 07:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.682 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.942 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.202 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.202 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.202 { 00:16:55.202 "cntlid": 11, 00:16:55.202 "qid": 0, 00:16:55.202 "state": "enabled", 00:16:55.202 "thread": "nvmf_tgt_poll_group_000", 00:16:55.202 "listen_address": { 00:16:55.202 "trtype": "TCP", 00:16:55.202 "adrfam": "IPv4", 00:16:55.202 "traddr": "10.0.0.2", 00:16:55.202 "trsvcid": "4420" 00:16:55.202 }, 00:16:55.202 "peer_address": { 00:16:55.202 "trtype": "TCP", 00:16:55.202 "adrfam": "IPv4", 00:16:55.202 "traddr": "10.0.0.1", 00:16:55.202 "trsvcid": "46790" 00:16:55.202 }, 00:16:55.202 "auth": { 00:16:55.202 "state": "completed", 00:16:55.202 "digest": "sha256", 00:16:55.202 "dhgroup": "ffdhe2048" 00:16:55.202 } 00:16:55.202 } 00:16:55.202 ]' 00:16:55.202 
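By this point the pattern repeated for every digest/dhgroup/key combination is visible end to end: restrict the host's allowed parameters with bdev_nvme_set_options, register the host on the subsystem with nvmf_subsystem_add_host, attach with bdev_nvme_attach_controller, then confirm on the target side that the qpair actually negotiated what was configured. Condensed, that verification reads the qpair list and compares the auth fields with jq; a sketch using the same RPCs and sockets as the trace, with digest and dhgroup standing in for the surrounding loop variables:

# target-side check that authentication completed with the expected parameters
qpairs=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" together with the expected digest and dhgroup (ffdhe2048 in the JSON just above) is what each [[ ... ]] line in the trace asserts before the controller is detached and the next combination is tried.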
07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.461 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.461 07:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.461 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.461 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.461 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.461 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.461 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.720 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.288 07:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.288 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.548 00:16:56.548 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.548 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.548 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.807 { 00:16:56.807 "cntlid": 13, 00:16:56.807 "qid": 0, 00:16:56.807 "state": "enabled", 00:16:56.807 "thread": "nvmf_tgt_poll_group_000", 00:16:56.807 "listen_address": { 00:16:56.807 "trtype": "TCP", 00:16:56.807 "adrfam": "IPv4", 00:16:56.807 "traddr": "10.0.0.2", 00:16:56.807 "trsvcid": "4420" 00:16:56.807 }, 00:16:56.807 "peer_address": { 00:16:56.807 "trtype": "TCP", 00:16:56.807 "adrfam": "IPv4", 00:16:56.807 "traddr": "10.0.0.1", 00:16:56.807 "trsvcid": "46818" 00:16:56.807 }, 00:16:56.807 "auth": { 00:16:56.807 "state": "completed", 00:16:56.807 "digest": "sha256", 00:16:56.807 "dhgroup": "ffdhe2048" 00:16:56.807 } 00:16:56.807 } 00:16:56.807 ]' 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.807 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.063 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.063 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.063 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.063 07:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.627 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.886 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.184 00:16:58.184 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.184 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:58.184 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.444 { 00:16:58.444 "cntlid": 15, 00:16:58.444 "qid": 0, 00:16:58.444 "state": "enabled", 00:16:58.444 "thread": "nvmf_tgt_poll_group_000", 00:16:58.444 "listen_address": { 00:16:58.444 "trtype": "TCP", 00:16:58.444 "adrfam": "IPv4", 00:16:58.444 "traddr": "10.0.0.2", 00:16:58.444 "trsvcid": "4420" 00:16:58.444 }, 00:16:58.444 "peer_address": { 00:16:58.444 "trtype": "TCP", 00:16:58.444 "adrfam": "IPv4", 00:16:58.444 "traddr": "10.0.0.1", 00:16:58.444 "trsvcid": "46856" 00:16:58.444 }, 00:16:58.444 "auth": { 00:16:58.444 "state": "completed", 00:16:58.444 "digest": "sha256", 00:16:58.444 "dhgroup": "ffdhe2048" 00:16:58.444 } 00:16:58.444 } 00:16:58.444 ]' 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.444 07:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.444 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.444 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.444 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.444 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.444 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.702 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.270 07:52:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.270 07:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.270 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.528 00:16:59.528 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.528 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.528 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.787 { 00:16:59.787 "cntlid": 17, 00:16:59.787 "qid": 0, 00:16:59.787 "state": "enabled", 00:16:59.787 "thread": "nvmf_tgt_poll_group_000", 00:16:59.787 "listen_address": { 00:16:59.787 "trtype": "TCP", 00:16:59.787 "adrfam": "IPv4", 
00:16:59.787 "traddr": "10.0.0.2", 00:16:59.787 "trsvcid": "4420" 00:16:59.787 }, 00:16:59.787 "peer_address": { 00:16:59.787 "trtype": "TCP", 00:16:59.787 "adrfam": "IPv4", 00:16:59.787 "traddr": "10.0.0.1", 00:16:59.787 "trsvcid": "46886" 00:16:59.787 }, 00:16:59.787 "auth": { 00:16:59.787 "state": "completed", 00:16:59.787 "digest": "sha256", 00:16:59.787 "dhgroup": "ffdhe3072" 00:16:59.787 } 00:16:59.787 } 00:16:59.787 ]' 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.787 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.045 07:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.611 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.870 07:52:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.870 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.129 00:17:01.129 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.129 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.129 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.388 { 00:17:01.388 "cntlid": 19, 00:17:01.388 "qid": 0, 00:17:01.388 "state": "enabled", 00:17:01.388 "thread": "nvmf_tgt_poll_group_000", 00:17:01.388 "listen_address": { 00:17:01.388 "trtype": "TCP", 00:17:01.388 "adrfam": "IPv4", 00:17:01.388 "traddr": "10.0.0.2", 00:17:01.388 "trsvcid": "4420" 00:17:01.388 }, 00:17:01.388 "peer_address": { 00:17:01.388 "trtype": "TCP", 00:17:01.388 "adrfam": "IPv4", 00:17:01.388 "traddr": "10.0.0.1", 00:17:01.388 "trsvcid": "46904" 00:17:01.388 }, 00:17:01.388 "auth": { 00:17:01.388 "state": "completed", 00:17:01.388 "digest": "sha256", 00:17:01.388 "dhgroup": "ffdhe3072" 00:17:01.388 } 00:17:01.388 } 00:17:01.388 ]' 00:17:01.388 07:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.388 07:52:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.388 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.646 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:02.214 07:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:02.473 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:02.473 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.473 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.474 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
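[Annotation] The --dhchap-secret strings passed to nvme connect use the DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 blob>:, where <t> encodes the transformation hash applied to the key (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 blob carries the secret plus a checksum. A compatible key can be produced with nvme-cli; the invocation below is illustrative, so check the flags against your nvme-cli version:

    # generate a 32-byte DH-HMAC-CHAP key transformed with SHA-256 (yields DHHC-1:01:...)
    nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562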
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.733 00:17:02.733 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.733 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.733 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.733 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.992 { 00:17:02.992 "cntlid": 21, 00:17:02.992 "qid": 0, 00:17:02.992 "state": "enabled", 00:17:02.992 "thread": "nvmf_tgt_poll_group_000", 00:17:02.992 "listen_address": { 00:17:02.992 "trtype": "TCP", 00:17:02.992 "adrfam": "IPv4", 00:17:02.992 "traddr": "10.0.0.2", 00:17:02.992 "trsvcid": "4420" 00:17:02.992 }, 00:17:02.992 "peer_address": { 00:17:02.992 "trtype": "TCP", 00:17:02.992 "adrfam": "IPv4", 00:17:02.992 "traddr": "10.0.0.1", 00:17:02.992 "trsvcid": "35636" 00:17:02.992 }, 00:17:02.992 "auth": { 00:17:02.992 "state": "completed", 00:17:02.992 "digest": "sha256", 00:17:02.992 "dhgroup": "ffdhe3072" 00:17:02.992 } 00:17:02.992 } 00:17:02.992 ]' 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.992 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.250 07:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
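[Annotation] Each pass above is one iteration of the keyid loop at target/auth.sh@93: the host app's negotiable parameters are pinned with bdev_nvme_set_options, the target registers the host NQN with the key pair, and bdev_nvme_attach_controller then has to complete an authenticated connect. Stripped of xtrace noise, one iteration looks roughly like this (a sketch; $hostnqn stands for the uuid-based NQN seen in the trace and rpc.py for the full scripts/rpc.py path):

    for keyid in "${!keys[@]}"; do
        # host side: pin the digests/dhgroups the initiator may offer
        rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
        # target side: allow the host, bound to this key pair
        rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # authenticated attach from the host app
        rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
            -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    done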
00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.819 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.078 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.078 00:17:04.338 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.338 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.338 07:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.338 { 00:17:04.338 "cntlid": 23, 00:17:04.338 "qid": 0, 00:17:04.338 "state": "enabled", 00:17:04.338 "thread": "nvmf_tgt_poll_group_000", 00:17:04.338 "listen_address": { 00:17:04.338 "trtype": "TCP", 00:17:04.338 "adrfam": "IPv4", 00:17:04.338 "traddr": "10.0.0.2", 00:17:04.338 "trsvcid": "4420" 00:17:04.338 }, 00:17:04.338 "peer_address": { 00:17:04.338 "trtype": "TCP", 00:17:04.338 "adrfam": "IPv4", 00:17:04.338 "traddr": "10.0.0.1", 00:17:04.338 "trsvcid": "35670" 00:17:04.338 }, 00:17:04.338 "auth": { 00:17:04.338 "state": "completed", 00:17:04.338 "digest": "sha256", 00:17:04.338 "dhgroup": "ffdhe3072" 00:17:04.338 } 00:17:04.338 } 00:17:04.338 ]' 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.338 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.597 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.166 07:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.425 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.684 00:17:05.684 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.684 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.684 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.943 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.944 { 00:17:05.944 "cntlid": 25, 00:17:05.944 "qid": 0, 00:17:05.944 "state": "enabled", 00:17:05.944 "thread": "nvmf_tgt_poll_group_000", 00:17:05.944 "listen_address": { 00:17:05.944 "trtype": "TCP", 00:17:05.944 "adrfam": "IPv4", 00:17:05.944 "traddr": "10.0.0.2", 00:17:05.944 "trsvcid": "4420" 00:17:05.944 }, 00:17:05.944 "peer_address": { 00:17:05.944 "trtype": "TCP", 00:17:05.944 "adrfam": "IPv4", 00:17:05.944 "traddr": "10.0.0.1", 00:17:05.944 "trsvcid": "35692" 00:17:05.944 }, 00:17:05.944 "auth": { 00:17:05.944 "state": "completed", 00:17:05.944 "digest": "sha256", 00:17:05.944 "dhgroup": "ffdhe4096" 00:17:05.944 } 00:17:05.944 } 00:17:05.944 ]' 00:17:05.944 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.944 07:52:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.944 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.944 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.944 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.202 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.202 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.202 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.202 07:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.771 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.030 07:52:51 
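[Annotation] Two SPDK instances are in play throughout this block: plain rpc_cmd talks to the nvmf target over its default RPC socket, while the hostrpc wrapper (target/auth.sh@31) adds -s /var/tmp/host.sock to reach the second app that acts as the NVMe-oF host. Side by side (sketch):

    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # target socket
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers        # host socket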
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.030 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.289 00:17:07.289 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.289 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.289 07:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.549 { 00:17:07.549 "cntlid": 27, 00:17:07.549 "qid": 0, 00:17:07.549 "state": "enabled", 00:17:07.549 "thread": "nvmf_tgt_poll_group_000", 00:17:07.549 "listen_address": { 00:17:07.549 "trtype": "TCP", 00:17:07.549 "adrfam": "IPv4", 00:17:07.549 "traddr": "10.0.0.2", 00:17:07.549 "trsvcid": "4420" 00:17:07.549 }, 00:17:07.549 "peer_address": { 00:17:07.549 "trtype": "TCP", 00:17:07.549 "adrfam": "IPv4", 00:17:07.549 "traddr": "10.0.0.1", 00:17:07.549 "trsvcid": "35732" 00:17:07.549 }, 00:17:07.549 "auth": { 00:17:07.549 "state": "completed", 00:17:07.549 "digest": "sha256", 00:17:07.549 "dhgroup": "ffdhe4096" 00:17:07.549 } 00:17:07.549 } 00:17:07.549 ]' 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.549 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.808 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:08.376 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.376 07:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.376 07:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.376 07:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.376 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.376 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.376 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.376 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.635 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.636 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.636 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.892 00:17:08.892 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.892 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.892 07:52:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.149 { 00:17:09.149 "cntlid": 29, 00:17:09.149 "qid": 0, 00:17:09.149 "state": "enabled", 00:17:09.149 "thread": "nvmf_tgt_poll_group_000", 00:17:09.149 "listen_address": { 00:17:09.149 "trtype": "TCP", 00:17:09.149 "adrfam": "IPv4", 00:17:09.149 "traddr": "10.0.0.2", 00:17:09.149 "trsvcid": "4420" 00:17:09.149 }, 00:17:09.149 "peer_address": { 00:17:09.149 "trtype": "TCP", 00:17:09.149 "adrfam": "IPv4", 00:17:09.149 "traddr": "10.0.0.1", 00:17:09.149 "trsvcid": "35762" 00:17:09.149 }, 00:17:09.149 "auth": { 00:17:09.149 "state": "completed", 00:17:09.149 "digest": "sha256", 00:17:09.149 "dhgroup": "ffdhe4096" 00:17:09.149 } 00:17:09.149 } 00:17:09.149 ]' 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.149 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.407 07:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.973 07:52:54 
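[Annotation] Before the key3 pass below, note that ckeys[3] is empty and target/auth.sh@37 builds the controller-key flag with a ${var:+...} expansion, so the key3 iterations (here and in the earlier ffdhe3072 pass) add the host and attach with --dhchap-key key3 only, i.e. unidirectional authentication; the matching nvme connect lines likewise carry no --dhchap-ctrl-secret. The idiom in isolation (illustrative variable names):

    # expands to zero words when ckeys[$keyid] is empty, dropping the flag entirely
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"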
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.973 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.974 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.246 07:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.246 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.504 { 00:17:10.504 "cntlid": 31, 00:17:10.504 "qid": 0, 00:17:10.504 "state": "enabled", 00:17:10.504 "thread": "nvmf_tgt_poll_group_000", 00:17:10.504 "listen_address": { 00:17:10.504 "trtype": "TCP", 00:17:10.504 "adrfam": "IPv4", 00:17:10.504 "traddr": "10.0.0.2", 00:17:10.504 "trsvcid": "4420" 00:17:10.504 }, 
00:17:10.504 "peer_address": { 00:17:10.504 "trtype": "TCP", 00:17:10.504 "adrfam": "IPv4", 00:17:10.504 "traddr": "10.0.0.1", 00:17:10.504 "trsvcid": "35784" 00:17:10.504 }, 00:17:10.504 "auth": { 00:17:10.504 "state": "completed", 00:17:10.504 "digest": "sha256", 00:17:10.504 "dhgroup": "ffdhe4096" 00:17:10.504 } 00:17:10.504 } 00:17:10.504 ]' 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.504 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.505 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.763 07:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.331 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.590 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.849 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.108 { 00:17:12.108 "cntlid": 33, 00:17:12.108 "qid": 0, 00:17:12.108 "state": "enabled", 00:17:12.108 "thread": "nvmf_tgt_poll_group_000", 00:17:12.108 "listen_address": { 00:17:12.108 "trtype": "TCP", 00:17:12.108 "adrfam": "IPv4", 00:17:12.108 "traddr": "10.0.0.2", 00:17:12.108 "trsvcid": "4420" 00:17:12.108 }, 00:17:12.108 "peer_address": { 00:17:12.108 "trtype": "TCP", 00:17:12.108 "adrfam": "IPv4", 00:17:12.108 "traddr": "10.0.0.1", 00:17:12.108 "trsvcid": "35822" 00:17:12.108 }, 00:17:12.108 "auth": { 00:17:12.108 "state": "completed", 00:17:12.108 "digest": "sha256", 00:17:12.108 "dhgroup": "ffdhe6144" 00:17:12.108 } 00:17:12.108 } 00:17:12.108 ]' 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.108 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.366 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.366 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.366 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.366 07:52:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.366 07:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.366 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.934 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.193 07:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.452 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.711 { 00:17:13.711 "cntlid": 35, 00:17:13.711 "qid": 0, 00:17:13.711 "state": "enabled", 00:17:13.711 "thread": "nvmf_tgt_poll_group_000", 00:17:13.711 "listen_address": { 00:17:13.711 "trtype": "TCP", 00:17:13.711 "adrfam": "IPv4", 00:17:13.711 "traddr": "10.0.0.2", 00:17:13.711 "trsvcid": "4420" 00:17:13.711 }, 00:17:13.711 "peer_address": { 00:17:13.711 "trtype": "TCP", 00:17:13.711 "adrfam": "IPv4", 00:17:13.711 "traddr": "10.0.0.1", 00:17:13.711 "trsvcid": "37790" 00:17:13.711 }, 00:17:13.711 "auth": { 00:17:13.711 "state": "completed", 00:17:13.711 "digest": "sha256", 00:17:13.711 "dhgroup": "ffdhe6144" 00:17:13.711 } 00:17:13.711 } 00:17:13.711 ]' 00:17:13.711 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.969 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.228 07:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.810 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.099 00:17:15.099 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.099 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.099 07:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.359 { 00:17:15.359 "cntlid": 37, 00:17:15.359 "qid": 0, 00:17:15.359 "state": "enabled", 00:17:15.359 "thread": "nvmf_tgt_poll_group_000", 00:17:15.359 "listen_address": { 00:17:15.359 "trtype": "TCP", 00:17:15.359 "adrfam": "IPv4", 00:17:15.359 "traddr": "10.0.0.2", 00:17:15.359 "trsvcid": "4420" 00:17:15.359 }, 00:17:15.359 "peer_address": { 00:17:15.359 "trtype": "TCP", 00:17:15.359 "adrfam": "IPv4", 00:17:15.359 "traddr": "10.0.0.1", 00:17:15.359 "trsvcid": "37806" 00:17:15.359 }, 00:17:15.359 "auth": { 00:17:15.359 "state": "completed", 00:17:15.359 "digest": "sha256", 00:17:15.359 "dhgroup": "ffdhe6144" 00:17:15.359 } 00:17:15.359 } 00:17:15.359 ]' 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.359 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.617 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.617 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.617 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.617 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.618 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.618 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:16.185 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.185 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.185 07:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.185 07:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.444 07:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.444 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.444 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.444 07:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.444 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.702 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.960 { 00:17:16.960 "cntlid": 39, 00:17:16.960 "qid": 0, 00:17:16.960 "state": "enabled", 00:17:16.960 "thread": "nvmf_tgt_poll_group_000", 00:17:16.960 "listen_address": { 00:17:16.960 "trtype": "TCP", 00:17:16.960 "adrfam": "IPv4", 00:17:16.960 "traddr": "10.0.0.2", 00:17:16.960 "trsvcid": "4420" 00:17:16.960 }, 00:17:16.960 "peer_address": { 00:17:16.960 "trtype": "TCP", 00:17:16.960 "adrfam": "IPv4", 00:17:16.960 "traddr": "10.0.0.1", 00:17:16.960 "trsvcid": "37848" 00:17:16.960 }, 00:17:16.960 "auth": { 00:17:16.960 "state": "completed", 00:17:16.960 "digest": "sha256", 00:17:16.960 "dhgroup": "ffdhe6144" 00:17:16.960 } 00:17:16.960 } 00:17:16.960 ]' 00:17:16.960 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.218 07:53:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.218 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.476 07:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=:
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:18.041 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.042 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.300 07:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
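For reference: each connect_authenticate round traced in this log reduces to the short host/target sequence sketched below. This is a paraphrase reconstructed from the xtrace lines target/auth.sh@31-@40 around this point, not the script's actual source; only the commands themselves appear in the trace, while $subnqn, $hostnqn and the contents of the ckeys array are assumed placeholders. Note the ${ckeys[...]:+...} expansion at @37: when a key has no paired controller key (key3 in this run), the ckey array collapses to nothing, so that round exercises unidirectional rather than bidirectional DH-HMAC-CHAP.

    # Assumed helper: the host-side SPDK app answers on /var/tmp/host.sock,
    # which is exactly what the @31 expansions in the trace show.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # Sketch of one round; the digest/dhgroup were already pinned beforehand via
    # "bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ..." (@94).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Optional controller key: expands to two words, or to nothing if ckeys[keyid] is empty.
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        # Register the host and its DH-HMAC-CHAP key(s) on the target...
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
        # ...then attach from the host side; the qpair checks at @44-@48 verify the result.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    }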
00:17:18.300 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.300 07:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.558
00:17:18.558 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:18.558 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:18.558 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:18.816 {
00:17:18.816 "cntlid": 41,
00:17:18.816 "qid": 0,
00:17:18.816 "state": "enabled",
00:17:18.816 "thread": "nvmf_tgt_poll_group_000",
00:17:18.816 "listen_address": {
00:17:18.816 "trtype": "TCP",
00:17:18.816 "adrfam": "IPv4",
00:17:18.816 "traddr": "10.0.0.2",
00:17:18.816 "trsvcid": "4420"
00:17:18.816 },
00:17:18.816 "peer_address": {
00:17:18.816 "trtype": "TCP",
00:17:18.816 "adrfam": "IPv4",
00:17:18.816 "traddr": "10.0.0.1",
00:17:18.816 "trsvcid": "37870"
00:17:18.816 },
00:17:18.816 "auth": {
00:17:18.816 "state": "completed",
00:17:18.816 "digest": "sha256",
00:17:18.816 "dhgroup": "ffdhe8192"
00:17:18.816 }
00:17:18.816 }
00:17:18.816 ]'
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:18.816 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:19.072 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.072 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.072 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.072 07:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret
DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.638 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.897 07:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.464 00:17:20.464 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.464 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.464 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.724 { 00:17:20.724 "cntlid": 43, 00:17:20.724 "qid": 0, 00:17:20.724 "state": "enabled", 00:17:20.724 "thread": "nvmf_tgt_poll_group_000", 00:17:20.724 "listen_address": { 00:17:20.724 "trtype": "TCP", 00:17:20.724 "adrfam": "IPv4", 00:17:20.724 "traddr": "10.0.0.2", 00:17:20.724 "trsvcid": "4420" 00:17:20.724 }, 00:17:20.724 "peer_address": { 00:17:20.724 "trtype": "TCP", 00:17:20.724 "adrfam": "IPv4", 00:17:20.724 "traddr": "10.0.0.1", 00:17:20.724 "trsvcid": "37910" 00:17:20.724 }, 00:17:20.724 "auth": { 00:17:20.724 "state": "completed", 00:17:20.724 "digest": "sha256", 00:17:20.724 "dhgroup": "ffdhe8192" 00:17:20.724 } 00:17:20.724 } 00:17:20.724 ]' 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.724 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.983 07:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.551 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.810 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.811 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.811 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.070 00:17:22.070 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.070 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.070 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.329 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.329 07:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.329 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.329 07:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.329 07:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.329 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.329 { 00:17:22.329 "cntlid": 45, 00:17:22.329 "qid": 0, 00:17:22.329 "state": "enabled", 00:17:22.329 "thread": "nvmf_tgt_poll_group_000", 00:17:22.329 "listen_address": { 00:17:22.329 "trtype": "TCP", 00:17:22.329 "adrfam": "IPv4", 00:17:22.329 "traddr": "10.0.0.2", 00:17:22.329 "trsvcid": "4420" 
00:17:22.329 }, 00:17:22.329 "peer_address": { 00:17:22.329 "trtype": "TCP", 00:17:22.329 "adrfam": "IPv4", 00:17:22.329 "traddr": "10.0.0.1", 00:17:22.329 "trsvcid": "37942" 00:17:22.329 }, 00:17:22.329 "auth": { 00:17:22.329 "state": "completed", 00:17:22.329 "digest": "sha256", 00:17:22.329 "dhgroup": "ffdhe8192" 00:17:22.329 } 00:17:22.329 } 00:17:22.329 ]' 00:17:22.329 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.329 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.329 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.588 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:23.155 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.156 07:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.415 07:53:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.415 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.984 00:17:23.984 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.984 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.984 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.241 { 00:17:24.241 "cntlid": 47, 00:17:24.241 "qid": 0, 00:17:24.241 "state": "enabled", 00:17:24.241 "thread": "nvmf_tgt_poll_group_000", 00:17:24.241 "listen_address": { 00:17:24.241 "trtype": "TCP", 00:17:24.241 "adrfam": "IPv4", 00:17:24.241 "traddr": "10.0.0.2", 00:17:24.241 "trsvcid": "4420" 00:17:24.241 }, 00:17:24.241 "peer_address": { 00:17:24.241 "trtype": "TCP", 00:17:24.241 "adrfam": "IPv4", 00:17:24.241 "traddr": "10.0.0.1", 00:17:24.241 "trsvcid": "56936" 00:17:24.241 }, 00:17:24.241 "auth": { 00:17:24.241 "state": "completed", 00:17:24.241 "digest": "sha256", 00:17:24.241 "dhgroup": "ffdhe8192" 00:17:24.241 } 00:17:24.241 } 00:17:24.241 ]' 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.241 07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.241 
07:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.500 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.066 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.326 07:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.326 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.585 { 00:17:25.585 "cntlid": 49, 00:17:25.585 "qid": 0, 00:17:25.585 "state": "enabled", 00:17:25.585 "thread": "nvmf_tgt_poll_group_000", 00:17:25.585 "listen_address": { 00:17:25.585 "trtype": "TCP", 00:17:25.585 "adrfam": "IPv4", 00:17:25.585 "traddr": "10.0.0.2", 00:17:25.585 "trsvcid": "4420" 00:17:25.585 }, 00:17:25.585 "peer_address": { 00:17:25.585 "trtype": "TCP", 00:17:25.585 "adrfam": "IPv4", 00:17:25.585 "traddr": "10.0.0.1", 00:17:25.585 "trsvcid": "56966" 00:17:25.585 }, 00:17:25.585 "auth": { 00:17:25.585 "state": "completed", 00:17:25.585 "digest": "sha384", 00:17:25.585 "dhgroup": "null" 00:17:25.585 } 00:17:25.585 } 00:17:25.585 ]' 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.585 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.844 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:25.844 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.844 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.844 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.844 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.103 07:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.671 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.929 00:17:26.929 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.929 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.929 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.188 { 00:17:27.188 "cntlid": 51, 00:17:27.188 "qid": 0, 00:17:27.188 "state": "enabled", 00:17:27.188 "thread": "nvmf_tgt_poll_group_000", 00:17:27.188 "listen_address": { 00:17:27.188 "trtype": "TCP", 00:17:27.188 "adrfam": "IPv4", 00:17:27.188 "traddr": "10.0.0.2", 00:17:27.188 "trsvcid": "4420" 00:17:27.188 }, 00:17:27.188 "peer_address": { 00:17:27.188 "trtype": "TCP", 00:17:27.188 "adrfam": "IPv4", 00:17:27.188 "traddr": "10.0.0.1", 00:17:27.188 "trsvcid": "56996" 00:17:27.188 }, 00:17:27.188 "auth": { 00:17:27.188 "state": "completed", 00:17:27.188 "digest": "sha384", 00:17:27.188 "dhgroup": "null" 00:17:27.188 } 00:17:27.188 } 00:17:27.188 ]' 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.188 07:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.446 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.064 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:28.322 07:53:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.322 07:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.581 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.581 07:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.839 { 00:17:28.839 "cntlid": 53, 00:17:28.839 "qid": 0, 00:17:28.839 "state": "enabled", 00:17:28.839 "thread": "nvmf_tgt_poll_group_000", 00:17:28.839 "listen_address": { 00:17:28.839 "trtype": "TCP", 00:17:28.839 "adrfam": "IPv4", 00:17:28.839 "traddr": "10.0.0.2", 00:17:28.839 "trsvcid": "4420" 00:17:28.839 }, 00:17:28.839 "peer_address": { 00:17:28.839 "trtype": "TCP", 00:17:28.839 "adrfam": "IPv4", 00:17:28.839 "traddr": "10.0.0.1", 00:17:28.839 "trsvcid": "57030" 00:17:28.839 }, 00:17:28.839 "auth": { 00:17:28.839 "state": "completed", 00:17:28.839 "digest": "sha384", 00:17:28.839 "dhgroup": "null" 00:17:28.839 } 00:17:28.839 } 00:17:28.839 ]' 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.839 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.097 07:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.665 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.923 00:17:29.923 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.923 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.923 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.182 { 00:17:30.182 "cntlid": 55, 00:17:30.182 "qid": 0, 00:17:30.182 "state": "enabled", 00:17:30.182 "thread": "nvmf_tgt_poll_group_000", 00:17:30.182 "listen_address": { 00:17:30.182 "trtype": "TCP", 00:17:30.182 "adrfam": "IPv4", 00:17:30.182 "traddr": "10.0.0.2", 00:17:30.182 "trsvcid": "4420" 00:17:30.182 }, 00:17:30.182 "peer_address": { 00:17:30.182 "trtype": "TCP", 00:17:30.182 "adrfam": "IPv4", 00:17:30.182 "traddr": "10.0.0.1", 00:17:30.182 "trsvcid": "57070" 00:17:30.182 }, 00:17:30.182 "auth": { 00:17:30.182 "state": "completed", 00:17:30.182 "digest": "sha384", 00:17:30.182 "dhgroup": "null" 00:17:30.182 } 00:17:30.182 } 00:17:30.182 ]' 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:30.182 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.441 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.441 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.441 07:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.441 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:31.009 07:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.009 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.268 07:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.528 00:17:31.528 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.528 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.528 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.813 { 00:17:31.813 "cntlid": 57, 00:17:31.813 "qid": 0, 00:17:31.813 "state": "enabled", 00:17:31.813 "thread": "nvmf_tgt_poll_group_000", 00:17:31.813 "listen_address": { 00:17:31.813 "trtype": "TCP", 00:17:31.813 "adrfam": "IPv4", 00:17:31.813 "traddr": "10.0.0.2", 00:17:31.813 "trsvcid": "4420" 00:17:31.813 }, 00:17:31.813 "peer_address": { 00:17:31.813 "trtype": "TCP", 00:17:31.813 "adrfam": "IPv4", 00:17:31.813 "traddr": "10.0.0.1", 00:17:31.813 "trsvcid": "57088" 00:17:31.813 }, 00:17:31.813 "auth": { 00:17:31.813 "state": "completed", 00:17:31.813 "digest": "sha384", 00:17:31.813 "dhgroup": "ffdhe2048" 00:17:31.813 } 00:17:31.813 } 00:17:31.813 ]' 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.813 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.071 07:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.640 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.899 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.899 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.899 00:17:32.899 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.899 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.899 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.158 { 00:17:33.158 "cntlid": 59, 00:17:33.158 "qid": 0, 00:17:33.158 "state": "enabled", 00:17:33.158 "thread": "nvmf_tgt_poll_group_000", 00:17:33.158 "listen_address": { 00:17:33.158 "trtype": "TCP", 00:17:33.158 "adrfam": "IPv4", 00:17:33.158 "traddr": "10.0.0.2", 00:17:33.158 "trsvcid": "4420" 00:17:33.158 }, 00:17:33.158 "peer_address": { 00:17:33.158 "trtype": "TCP", 00:17:33.158 "adrfam": "IPv4", 00:17:33.158 
"traddr": "10.0.0.1", 00:17:33.158 "trsvcid": "39674" 00:17:33.158 }, 00:17:33.158 "auth": { 00:17:33.158 "state": "completed", 00:17:33.158 "digest": "sha384", 00:17:33.158 "dhgroup": "ffdhe2048" 00:17:33.158 } 00:17:33.158 } 00:17:33.158 ]' 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.158 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.416 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.416 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.416 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.416 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.416 07:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.416 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.982 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.983 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.241 07:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.500 00:17:34.500 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.500 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.500 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.759 { 00:17:34.759 "cntlid": 61, 00:17:34.759 "qid": 0, 00:17:34.759 "state": "enabled", 00:17:34.759 "thread": "nvmf_tgt_poll_group_000", 00:17:34.759 "listen_address": { 00:17:34.759 "trtype": "TCP", 00:17:34.759 "adrfam": "IPv4", 00:17:34.759 "traddr": "10.0.0.2", 00:17:34.759 "trsvcid": "4420" 00:17:34.759 }, 00:17:34.759 "peer_address": { 00:17:34.759 "trtype": "TCP", 00:17:34.759 "adrfam": "IPv4", 00:17:34.759 "traddr": "10.0.0.1", 00:17:34.759 "trsvcid": "39698" 00:17:34.759 }, 00:17:34.759 "auth": { 00:17:34.759 "state": "completed", 00:17:34.759 "digest": "sha384", 00:17:34.759 "dhgroup": "ffdhe2048" 00:17:34.759 } 00:17:34.759 } 00:17:34.759 ]' 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.759 07:53:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.017 07:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.585 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.844 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.103 00:17:36.103 07:53:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.103 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.361 07:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.361 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.361 { 00:17:36.361 "cntlid": 63, 00:17:36.361 "qid": 0, 00:17:36.361 "state": "enabled", 00:17:36.361 "thread": "nvmf_tgt_poll_group_000", 00:17:36.361 "listen_address": { 00:17:36.361 "trtype": "TCP", 00:17:36.361 "adrfam": "IPv4", 00:17:36.361 "traddr": "10.0.0.2", 00:17:36.361 "trsvcid": "4420" 00:17:36.361 }, 00:17:36.361 "peer_address": { 00:17:36.361 "trtype": "TCP", 00:17:36.361 "adrfam": "IPv4", 00:17:36.362 "traddr": "10.0.0.1", 00:17:36.362 "trsvcid": "39710" 00:17:36.362 }, 00:17:36.362 "auth": { 00:17:36.362 "state": "completed", 00:17:36.362 "digest": "sha384", 00:17:36.362 "dhgroup": "ffdhe2048" 00:17:36.362 } 00:17:36.362 } 00:17:36.362 ]' 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.362 07:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.620 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
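[The trace above and below exercises one authentication cycle per (digest, dhgroup, key) combination: the host initiator is pinned to a single DH-HMAC-CHAP digest/dhgroup pair, the host is registered on the target with the key pair under test, a controller attach forces the handshake, and the resulting qpair is checked for auth.state == "completed" before teardown. A condensed shell sketch of that cycle follows; the rpc.py path, socket, NQNs, and flags are taken verbatim from this log, while the variable names and loop framing are illustrative reconstructions, not the literal auth.sh source.]

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate iteration as traced above.
    # rpc_cmd (the target-side RPC wrapper) comes from the harness's
    # autotest_common.sh; the host side goes through rpc.py on host.sock.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    digest=sha384 dhgroup=ffdhe2048 keyid=0   # the loop variables seen in the @92/@93 markers

    # 1. Pin the host initiator to one digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Register the host on the target with the key pair under test
    #    (the key3 iterations in this log carry no --dhchap-ctrlr-key,
    #    i.e. ckey3 is unset).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Attach a controller; this is where the DH-HMAC-CHAP handshake runs.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 4. Confirm the qpair authenticated with the expected parameters.
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'   # completed / sha384 / ffdhe2048

    # 5. Tear down before the next (digest, dhgroup, key) combination.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

[As the @52/@55 markers show, each key is also validated once through the kernel initiator, with nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... followed by nvme disconnect and nvmf_subsystem_remove_host, exercising the same secrets outside SPDK's host stack.]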
00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.188 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.447 07:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.447 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.447 07:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.447 00:17:37.447 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.448 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.448 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.707 { 
00:17:37.707 "cntlid": 65, 00:17:37.707 "qid": 0, 00:17:37.707 "state": "enabled", 00:17:37.707 "thread": "nvmf_tgt_poll_group_000", 00:17:37.707 "listen_address": { 00:17:37.707 "trtype": "TCP", 00:17:37.707 "adrfam": "IPv4", 00:17:37.707 "traddr": "10.0.0.2", 00:17:37.707 "trsvcid": "4420" 00:17:37.707 }, 00:17:37.707 "peer_address": { 00:17:37.707 "trtype": "TCP", 00:17:37.707 "adrfam": "IPv4", 00:17:37.707 "traddr": "10.0.0.1", 00:17:37.707 "trsvcid": "39734" 00:17:37.707 }, 00:17:37.707 "auth": { 00:17:37.707 "state": "completed", 00:17:37.707 "digest": "sha384", 00:17:37.707 "dhgroup": "ffdhe3072" 00:17:37.707 } 00:17:37.707 } 00:17:37.707 ]' 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.707 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.967 07:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.535 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.793 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.051 00:17:39.051 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.051 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.051 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.309 { 00:17:39.309 "cntlid": 67, 00:17:39.309 "qid": 0, 00:17:39.309 "state": "enabled", 00:17:39.309 "thread": "nvmf_tgt_poll_group_000", 00:17:39.309 "listen_address": { 00:17:39.309 "trtype": "TCP", 00:17:39.309 "adrfam": "IPv4", 00:17:39.309 "traddr": "10.0.0.2", 00:17:39.309 "trsvcid": "4420" 00:17:39.309 }, 00:17:39.309 "peer_address": { 00:17:39.309 "trtype": "TCP", 00:17:39.309 "adrfam": "IPv4", 00:17:39.309 "traddr": "10.0.0.1", 00:17:39.309 "trsvcid": "39764" 00:17:39.309 }, 00:17:39.309 "auth": { 00:17:39.309 "state": "completed", 00:17:39.309 "digest": "sha384", 00:17:39.309 "dhgroup": "ffdhe3072" 00:17:39.309 } 00:17:39.309 } 00:17:39.309 ]' 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.309 07:53:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.309 07:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.309 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.309 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.309 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.567 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.134 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.393 07:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.652 00:17:40.652 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.652 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.652 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.911 { 00:17:40.911 "cntlid": 69, 00:17:40.911 "qid": 0, 00:17:40.911 "state": "enabled", 00:17:40.911 "thread": "nvmf_tgt_poll_group_000", 00:17:40.911 "listen_address": { 00:17:40.911 "trtype": "TCP", 00:17:40.911 "adrfam": "IPv4", 00:17:40.911 "traddr": "10.0.0.2", 00:17:40.911 "trsvcid": "4420" 00:17:40.911 }, 00:17:40.911 "peer_address": { 00:17:40.911 "trtype": "TCP", 00:17:40.911 "adrfam": "IPv4", 00:17:40.911 "traddr": "10.0.0.1", 00:17:40.911 "trsvcid": "39790" 00:17:40.911 }, 00:17:40.911 "auth": { 00:17:40.911 "state": "completed", 00:17:40.911 "digest": "sha384", 00:17:40.911 "dhgroup": "ffdhe3072" 00:17:40.911 } 00:17:40.911 } 00:17:40.911 ]' 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.911 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.170 07:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret 
DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.739 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.998 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.257 { 00:17:42.257 "cntlid": 71, 00:17:42.257 "qid": 0, 00:17:42.257 "state": "enabled", 00:17:42.257 "thread": "nvmf_tgt_poll_group_000", 00:17:42.257 "listen_address": { 00:17:42.257 "trtype": "TCP", 00:17:42.257 "adrfam": "IPv4", 00:17:42.257 "traddr": "10.0.0.2", 00:17:42.257 "trsvcid": "4420" 00:17:42.257 }, 00:17:42.257 "peer_address": { 00:17:42.257 "trtype": "TCP", 00:17:42.257 "adrfam": "IPv4", 00:17:42.257 "traddr": "10.0.0.1", 00:17:42.257 "trsvcid": "39812" 00:17:42.257 }, 00:17:42.257 "auth": { 00:17:42.257 "state": "completed", 00:17:42.257 "digest": "sha384", 00:17:42.257 "dhgroup": "ffdhe3072" 00:17:42.257 } 00:17:42.257 } 00:17:42.257 ]' 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.257 07:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.515 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:43.082 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.083 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.343 07:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.343 07:53:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.343 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.602 00:17:43.602 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.602 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.602 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.861 { 00:17:43.861 "cntlid": 73, 00:17:43.861 "qid": 0, 00:17:43.861 "state": "enabled", 00:17:43.861 "thread": "nvmf_tgt_poll_group_000", 00:17:43.861 "listen_address": { 00:17:43.861 "trtype": "TCP", 00:17:43.861 "adrfam": "IPv4", 00:17:43.861 "traddr": "10.0.0.2", 00:17:43.861 "trsvcid": "4420" 00:17:43.861 }, 00:17:43.861 "peer_address": { 00:17:43.861 "trtype": "TCP", 00:17:43.861 "adrfam": "IPv4", 00:17:43.861 "traddr": "10.0.0.1", 00:17:43.861 "trsvcid": "35772" 00:17:43.861 }, 00:17:43.861 "auth": { 00:17:43.861 
"state": "completed", 00:17:43.861 "digest": "sha384", 00:17:43.861 "dhgroup": "ffdhe4096" 00:17:43.861 } 00:17:43.861 } 00:17:43.861 ]' 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.861 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.119 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.120 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.120 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.120 07:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:44.687 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.946 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.205 00:17:45.205 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.205 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.205 07:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.463 { 00:17:45.463 "cntlid": 75, 00:17:45.463 "qid": 0, 00:17:45.463 "state": "enabled", 00:17:45.463 "thread": "nvmf_tgt_poll_group_000", 00:17:45.463 "listen_address": { 00:17:45.463 "trtype": "TCP", 00:17:45.463 "adrfam": "IPv4", 00:17:45.463 "traddr": "10.0.0.2", 00:17:45.463 "trsvcid": "4420" 00:17:45.463 }, 00:17:45.463 "peer_address": { 00:17:45.463 "trtype": "TCP", 00:17:45.463 "adrfam": "IPv4", 00:17:45.463 "traddr": "10.0.0.1", 00:17:45.463 "trsvcid": "35790" 00:17:45.463 }, 00:17:45.463 "auth": { 00:17:45.463 "state": "completed", 00:17:45.463 "digest": "sha384", 00:17:45.463 "dhgroup": "ffdhe4096" 00:17:45.463 } 00:17:45.463 } 00:17:45.463 ]' 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.463 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.722 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.290 07:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.550 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:46.809 00:17:46.809 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.809 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.809 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.069 { 00:17:47.069 "cntlid": 77, 00:17:47.069 "qid": 0, 00:17:47.069 "state": "enabled", 00:17:47.069 "thread": "nvmf_tgt_poll_group_000", 00:17:47.069 "listen_address": { 00:17:47.069 "trtype": "TCP", 00:17:47.069 "adrfam": "IPv4", 00:17:47.069 "traddr": "10.0.0.2", 00:17:47.069 "trsvcid": "4420" 00:17:47.069 }, 00:17:47.069 "peer_address": { 00:17:47.069 "trtype": "TCP", 00:17:47.069 "adrfam": "IPv4", 00:17:47.069 "traddr": "10.0.0.1", 00:17:47.069 "trsvcid": "35836" 00:17:47.069 }, 00:17:47.069 "auth": { 00:17:47.069 "state": "completed", 00:17:47.069 "digest": "sha384", 00:17:47.069 "dhgroup": "ffdhe4096" 00:17:47.069 } 00:17:47.069 } 00:17:47.069 ]' 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.069 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.070 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.329 07:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:47.896 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.156 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.443 00:17:48.443 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.443 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.443 07:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.443 { 00:17:48.443 "cntlid": 79, 00:17:48.443 "qid": 
0, 00:17:48.443 "state": "enabled", 00:17:48.443 "thread": "nvmf_tgt_poll_group_000", 00:17:48.443 "listen_address": { 00:17:48.443 "trtype": "TCP", 00:17:48.443 "adrfam": "IPv4", 00:17:48.443 "traddr": "10.0.0.2", 00:17:48.443 "trsvcid": "4420" 00:17:48.443 }, 00:17:48.443 "peer_address": { 00:17:48.443 "trtype": "TCP", 00:17:48.443 "adrfam": "IPv4", 00:17:48.443 "traddr": "10.0.0.1", 00:17:48.443 "trsvcid": "35870" 00:17:48.443 }, 00:17:48.443 "auth": { 00:17:48.443 "state": "completed", 00:17:48.443 "digest": "sha384", 00:17:48.443 "dhgroup": "ffdhe4096" 00:17:48.443 } 00:17:48.443 } 00:17:48.443 ]' 00:17:48.443 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.701 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.959 07:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.527 07:53:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.527 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.096 00:17:50.096 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.096 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.096 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.096 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.096 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.097 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.097 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.097 07:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.097 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.097 { 00:17:50.097 "cntlid": 81, 00:17:50.097 "qid": 0, 00:17:50.097 "state": "enabled", 00:17:50.097 "thread": "nvmf_tgt_poll_group_000", 00:17:50.097 "listen_address": { 00:17:50.097 "trtype": "TCP", 00:17:50.097 "adrfam": "IPv4", 00:17:50.097 "traddr": "10.0.0.2", 00:17:50.097 "trsvcid": "4420" 00:17:50.097 }, 00:17:50.097 "peer_address": { 00:17:50.097 "trtype": "TCP", 00:17:50.097 "adrfam": "IPv4", 00:17:50.097 "traddr": "10.0.0.1", 00:17:50.097 "trsvcid": "35892" 00:17:50.097 }, 00:17:50.097 "auth": { 00:17:50.097 "state": "completed", 00:17:50.097 "digest": "sha384", 00:17:50.097 "dhgroup": "ffdhe6144" 00:17:50.097 } 00:17:50.097 } 00:17:50.097 ]' 00:17:50.097 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.356 07:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.616 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.185 07:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.753 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.753 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.753 { 00:17:51.753 "cntlid": 83, 00:17:51.753 "qid": 0, 00:17:51.753 "state": "enabled", 00:17:51.753 "thread": "nvmf_tgt_poll_group_000", 00:17:51.753 "listen_address": { 00:17:51.753 "trtype": "TCP", 00:17:51.753 "adrfam": "IPv4", 00:17:51.753 "traddr": "10.0.0.2", 00:17:51.753 "trsvcid": "4420" 00:17:51.753 }, 00:17:51.753 "peer_address": { 00:17:51.753 "trtype": "TCP", 00:17:51.753 "adrfam": "IPv4", 00:17:51.753 "traddr": "10.0.0.1", 00:17:51.753 "trsvcid": "35914" 00:17:51.753 }, 00:17:51.753 "auth": { 00:17:51.753 "state": "completed", 00:17:51.753 "digest": "sha384", 00:17:51.753 "dhgroup": "ffdhe6144" 00:17:51.753 } 00:17:51.754 } 00:17:51.754 ]' 00:17:51.754 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.754 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.754 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.754 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.754 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.013 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.013 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.013 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.013 07:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret 
DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:52.581 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.581 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.581 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.581 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.582 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.582 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.582 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.582 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.841 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.100 00:17:53.100 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.100 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.100 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.359 07:53:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.359 07:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.359 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.359 07:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.359 07:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.359 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.359 { 00:17:53.359 "cntlid": 85, 00:17:53.359 "qid": 0, 00:17:53.359 "state": "enabled", 00:17:53.359 "thread": "nvmf_tgt_poll_group_000", 00:17:53.359 "listen_address": { 00:17:53.359 "trtype": "TCP", 00:17:53.359 "adrfam": "IPv4", 00:17:53.359 "traddr": "10.0.0.2", 00:17:53.359 "trsvcid": "4420" 00:17:53.359 }, 00:17:53.359 "peer_address": { 00:17:53.359 "trtype": "TCP", 00:17:53.359 "adrfam": "IPv4", 00:17:53.359 "traddr": "10.0.0.1", 00:17:53.360 "trsvcid": "36518" 00:17:53.360 }, 00:17:53.360 "auth": { 00:17:53.360 "state": "completed", 00:17:53.360 "digest": "sha384", 00:17:53.360 "dhgroup": "ffdhe6144" 00:17:53.360 } 00:17:53.360 } 00:17:53.360 ]' 00:17:53.360 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.360 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.360 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.360 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.360 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.619 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.619 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.619 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.619 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:17:54.187 07:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.446 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.705 00:17:54.705 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.705 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.705 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.964 { 00:17:54.964 "cntlid": 87, 00:17:54.964 "qid": 0, 00:17:54.964 "state": "enabled", 00:17:54.964 "thread": "nvmf_tgt_poll_group_000", 00:17:54.964 "listen_address": { 00:17:54.964 "trtype": "TCP", 00:17:54.964 "adrfam": "IPv4", 00:17:54.964 "traddr": "10.0.0.2", 00:17:54.964 "trsvcid": "4420" 00:17:54.964 }, 00:17:54.964 "peer_address": { 00:17:54.964 "trtype": "TCP", 00:17:54.964 "adrfam": "IPv4", 00:17:54.964 "traddr": "10.0.0.1", 00:17:54.964 "trsvcid": "36542" 00:17:54.964 }, 00:17:54.964 "auth": { 00:17:54.964 "state": "completed", 
00:17:54.964 "digest": "sha384", 00:17:54.964 "dhgroup": "ffdhe6144" 00:17:54.964 } 00:17:54.964 } 00:17:54.964 ]' 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.964 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.223 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.223 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.223 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.223 07:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.790 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.049 07:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.616 00:17:56.616 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.616 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.616 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.875 { 00:17:56.875 "cntlid": 89, 00:17:56.875 "qid": 0, 00:17:56.875 "state": "enabled", 00:17:56.875 "thread": "nvmf_tgt_poll_group_000", 00:17:56.875 "listen_address": { 00:17:56.875 "trtype": "TCP", 00:17:56.875 "adrfam": "IPv4", 00:17:56.875 "traddr": "10.0.0.2", 00:17:56.875 "trsvcid": "4420" 00:17:56.875 }, 00:17:56.875 "peer_address": { 00:17:56.875 "trtype": "TCP", 00:17:56.875 "adrfam": "IPv4", 00:17:56.875 "traddr": "10.0.0.1", 00:17:56.875 "trsvcid": "36568" 00:17:56.875 }, 00:17:56.875 "auth": { 00:17:56.875 "state": "completed", 00:17:56.875 "digest": "sha384", 00:17:56.875 "dhgroup": "ffdhe8192" 00:17:56.875 } 00:17:56.875 } 00:17:56.875 ]' 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.875 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.134 07:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.702 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:17:58.270 00:17:58.270 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.270 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.270 07:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.528 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.528 { 00:17:58.528 "cntlid": 91, 00:17:58.528 "qid": 0, 00:17:58.528 "state": "enabled", 00:17:58.528 "thread": "nvmf_tgt_poll_group_000", 00:17:58.528 "listen_address": { 00:17:58.528 "trtype": "TCP", 00:17:58.528 "adrfam": "IPv4", 00:17:58.528 "traddr": "10.0.0.2", 00:17:58.528 "trsvcid": "4420" 00:17:58.528 }, 00:17:58.528 "peer_address": { 00:17:58.528 "trtype": "TCP", 00:17:58.528 "adrfam": "IPv4", 00:17:58.528 "traddr": "10.0.0.1", 00:17:58.528 "trsvcid": "36588" 00:17:58.529 }, 00:17:58.529 "auth": { 00:17:58.529 "state": "completed", 00:17:58.529 "digest": "sha384", 00:17:58.529 "dhgroup": "ffdhe8192" 00:17:58.529 } 00:17:58.529 } 00:17:58.529 ]' 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.529 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.787 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:17:59.353 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.353 07:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.353 07:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:59.353 07:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.353 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.353 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.353 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.353 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.611 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.179 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.179 { 
00:18:00.179 "cntlid": 93, 00:18:00.179 "qid": 0, 00:18:00.179 "state": "enabled", 00:18:00.179 "thread": "nvmf_tgt_poll_group_000", 00:18:00.179 "listen_address": { 00:18:00.179 "trtype": "TCP", 00:18:00.179 "adrfam": "IPv4", 00:18:00.179 "traddr": "10.0.0.2", 00:18:00.179 "trsvcid": "4420" 00:18:00.179 }, 00:18:00.179 "peer_address": { 00:18:00.179 "trtype": "TCP", 00:18:00.179 "adrfam": "IPv4", 00:18:00.179 "traddr": "10.0.0.1", 00:18:00.179 "trsvcid": "36604" 00:18:00.179 }, 00:18:00.179 "auth": { 00:18:00.179 "state": "completed", 00:18:00.179 "digest": "sha384", 00:18:00.179 "dhgroup": "ffdhe8192" 00:18:00.179 } 00:18:00.179 } 00:18:00.179 ]' 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.179 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.437 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.437 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.437 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.437 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.437 07:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.437 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.003 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.262 07:53:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.262 07:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.829 00:18:01.829 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.829 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.829 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.087 { 00:18:02.087 "cntlid": 95, 00:18:02.087 "qid": 0, 00:18:02.087 "state": "enabled", 00:18:02.087 "thread": "nvmf_tgt_poll_group_000", 00:18:02.087 "listen_address": { 00:18:02.087 "trtype": "TCP", 00:18:02.087 "adrfam": "IPv4", 00:18:02.087 "traddr": "10.0.0.2", 00:18:02.087 "trsvcid": "4420" 00:18:02.087 }, 00:18:02.087 "peer_address": { 00:18:02.087 "trtype": "TCP", 00:18:02.087 "adrfam": "IPv4", 00:18:02.087 "traddr": "10.0.0.1", 00:18:02.087 "trsvcid": "36630" 00:18:02.087 }, 00:18:02.087 "auth": { 00:18:02.087 "state": "completed", 00:18:02.087 "digest": "sha384", 00:18:02.087 "dhgroup": "ffdhe8192" 00:18:02.087 } 00:18:02.087 } 00:18:02.087 ]' 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.087 07:53:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.087 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.345 07:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.912 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.171 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.171 00:18:03.430 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.430 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.430 07:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.430 { 00:18:03.430 "cntlid": 97, 00:18:03.430 "qid": 0, 00:18:03.430 "state": "enabled", 00:18:03.430 "thread": "nvmf_tgt_poll_group_000", 00:18:03.430 "listen_address": { 00:18:03.430 "trtype": "TCP", 00:18:03.430 "adrfam": "IPv4", 00:18:03.430 "traddr": "10.0.0.2", 00:18:03.430 "trsvcid": "4420" 00:18:03.430 }, 00:18:03.430 "peer_address": { 00:18:03.430 "trtype": "TCP", 00:18:03.430 "adrfam": "IPv4", 00:18:03.430 "traddr": "10.0.0.1", 00:18:03.430 "trsvcid": "33386" 00:18:03.430 }, 00:18:03.430 "auth": { 00:18:03.430 "state": "completed", 00:18:03.430 "digest": "sha512", 00:18:03.430 "dhgroup": "null" 00:18:03.430 } 00:18:03.430 } 00:18:03.430 ]' 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.430 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.689 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.689 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.689 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.689 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.689 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.947 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret 
DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:04.515 07:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.515 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.775 00:18:04.775 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.775 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.775 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.034 { 00:18:05.034 "cntlid": 99, 00:18:05.034 "qid": 0, 00:18:05.034 "state": "enabled", 00:18:05.034 "thread": "nvmf_tgt_poll_group_000", 00:18:05.034 "listen_address": { 00:18:05.034 "trtype": "TCP", 00:18:05.034 "adrfam": "IPv4", 00:18:05.034 "traddr": "10.0.0.2", 00:18:05.034 "trsvcid": "4420" 00:18:05.034 }, 00:18:05.034 "peer_address": { 00:18:05.034 "trtype": "TCP", 00:18:05.034 "adrfam": "IPv4", 00:18:05.034 "traddr": "10.0.0.1", 00:18:05.034 "trsvcid": "33426" 00:18:05.034 }, 00:18:05.034 "auth": { 00:18:05.034 "state": "completed", 00:18:05.034 "digest": "sha512", 00:18:05.034 "dhgroup": "null" 00:18:05.034 } 00:18:05.034 } 00:18:05.034 ]' 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.034 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.035 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.035 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.322 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.322 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.322 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.322 07:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.903 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:05.903 07:53:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.162 07:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.420 00:18:06.420 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.420 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.420 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.677 { 00:18:06.677 "cntlid": 101, 00:18:06.677 "qid": 0, 00:18:06.677 "state": "enabled", 00:18:06.677 "thread": "nvmf_tgt_poll_group_000", 00:18:06.677 "listen_address": { 00:18:06.677 "trtype": "TCP", 00:18:06.677 "adrfam": "IPv4", 00:18:06.677 "traddr": "10.0.0.2", 00:18:06.677 "trsvcid": "4420" 00:18:06.677 }, 00:18:06.677 "peer_address": { 00:18:06.677 "trtype": "TCP", 00:18:06.677 "adrfam": "IPv4", 00:18:06.677 "traddr": "10.0.0.1", 00:18:06.677 "trsvcid": "33454" 00:18:06.677 }, 00:18:06.677 "auth": 
{ 00:18:06.677 "state": "completed", 00:18:06.677 "digest": "sha512", 00:18:06.677 "dhgroup": "null" 00:18:06.677 } 00:18:06.677 } 00:18:06.677 ]' 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.677 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.936 07:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.503 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.761 00:18:07.761 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.761 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.761 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.020 { 00:18:08.020 "cntlid": 103, 00:18:08.020 "qid": 0, 00:18:08.020 "state": "enabled", 00:18:08.020 "thread": "nvmf_tgt_poll_group_000", 00:18:08.020 "listen_address": { 00:18:08.020 "trtype": "TCP", 00:18:08.020 "adrfam": "IPv4", 00:18:08.020 "traddr": "10.0.0.2", 00:18:08.020 "trsvcid": "4420" 00:18:08.020 }, 00:18:08.020 "peer_address": { 00:18:08.020 "trtype": "TCP", 00:18:08.020 "adrfam": "IPv4", 00:18:08.020 "traddr": "10.0.0.1", 00:18:08.020 "trsvcid": "33488" 00:18:08.020 }, 00:18:08.020 "auth": { 00:18:08.020 "state": "completed", 00:18:08.020 "digest": "sha512", 00:18:08.020 "dhgroup": "null" 00:18:08.020 } 00:18:08.020 } 00:18:08.020 ]' 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.020 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.021 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.021 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.021 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.280 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.280 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.280 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.280 07:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.848 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.107 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.365 00:18:09.365 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.365 07:53:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.365 07:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.624 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.624 { 00:18:09.624 "cntlid": 105, 00:18:09.624 "qid": 0, 00:18:09.624 "state": "enabled", 00:18:09.624 "thread": "nvmf_tgt_poll_group_000", 00:18:09.625 "listen_address": { 00:18:09.625 "trtype": "TCP", 00:18:09.625 "adrfam": "IPv4", 00:18:09.625 "traddr": "10.0.0.2", 00:18:09.625 "trsvcid": "4420" 00:18:09.625 }, 00:18:09.625 "peer_address": { 00:18:09.625 "trtype": "TCP", 00:18:09.625 "adrfam": "IPv4", 00:18:09.625 "traddr": "10.0.0.1", 00:18:09.625 "trsvcid": "33508" 00:18:09.625 }, 00:18:09.625 "auth": { 00:18:09.625 "state": "completed", 00:18:09.625 "digest": "sha512", 00:18:09.625 "dhgroup": "ffdhe2048" 00:18:09.625 } 00:18:09.625 } 00:18:09.625 ]' 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.625 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.884 07:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
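
Each authentication round in this trace repeats the same shape. The sketch below condenses one sha512/ffdhe2048 round using only the RPCs, flags, and addresses that appear in the trace itself; key0/ckey0 are key names set up earlier in auth.sh (their creation is outside this excerpt), and rpc_cmd in the trace is assumed to drive the same rpc.py against the target application's default socket.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Pin the SPDK host (socket /var/tmp/host.sock) to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Register the host on the target with a DH-HMAC-CHAP key; the optional
#    controller key (ckey0) turns on bidirectional authentication.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach from the SPDK host, check the negotiated auth parameters on the
#    target, then detach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 4. Repeat the connect through the kernel initiator; nvme-cli takes the
#    literal DHHC-1 secret strings rather than key names, so $key0/$ckey0
#    here stand for the secrets printed in the trace.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
nvme disconnect -n "$subnqn"

# 5. Deregister the host before the next digest/dhgroup/key combination.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Besides .auth.state, the trace checks .auth.digest and .auth.dhgroup from the same nvmf_subsystem_get_qpairs output against the values configured in step 1.
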
00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.452 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.711 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.969 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.969 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.228 { 00:18:11.228 "cntlid": 107, 00:18:11.228 "qid": 0, 00:18:11.228 "state": "enabled", 00:18:11.228 "thread": 
"nvmf_tgt_poll_group_000", 00:18:11.228 "listen_address": { 00:18:11.228 "trtype": "TCP", 00:18:11.228 "adrfam": "IPv4", 00:18:11.228 "traddr": "10.0.0.2", 00:18:11.228 "trsvcid": "4420" 00:18:11.228 }, 00:18:11.228 "peer_address": { 00:18:11.228 "trtype": "TCP", 00:18:11.228 "adrfam": "IPv4", 00:18:11.228 "traddr": "10.0.0.1", 00:18:11.228 "trsvcid": "33542" 00:18:11.228 }, 00:18:11.228 "auth": { 00:18:11.228 "state": "completed", 00:18:11.228 "digest": "sha512", 00:18:11.228 "dhgroup": "ffdhe2048" 00:18:11.228 } 00:18:11.228 } 00:18:11.228 ]' 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.228 07:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.487 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.056 07:53:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.056 07:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.315 00:18:12.315 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.315 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.315 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.575 { 00:18:12.575 "cntlid": 109, 00:18:12.575 "qid": 0, 00:18:12.575 "state": "enabled", 00:18:12.575 "thread": "nvmf_tgt_poll_group_000", 00:18:12.575 "listen_address": { 00:18:12.575 "trtype": "TCP", 00:18:12.575 "adrfam": "IPv4", 00:18:12.575 "traddr": "10.0.0.2", 00:18:12.575 "trsvcid": "4420" 00:18:12.575 }, 00:18:12.575 "peer_address": { 00:18:12.575 "trtype": "TCP", 00:18:12.575 "adrfam": "IPv4", 00:18:12.575 "traddr": "10.0.0.1", 00:18:12.575 "trsvcid": "47494" 00:18:12.575 }, 00:18:12.575 "auth": { 00:18:12.575 "state": "completed", 00:18:12.575 "digest": "sha512", 00:18:12.575 "dhgroup": "ffdhe2048" 00:18:12.575 } 00:18:12.575 } 00:18:12.575 ]' 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.575 07:53:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.835 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.835 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.835 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.835 07:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.404 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.664 07:53:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.923 00:18:13.923 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.923 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.923 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.183 { 00:18:14.183 "cntlid": 111, 00:18:14.183 "qid": 0, 00:18:14.183 "state": "enabled", 00:18:14.183 "thread": "nvmf_tgt_poll_group_000", 00:18:14.183 "listen_address": { 00:18:14.183 "trtype": "TCP", 00:18:14.183 "adrfam": "IPv4", 00:18:14.183 "traddr": "10.0.0.2", 00:18:14.183 "trsvcid": "4420" 00:18:14.183 }, 00:18:14.183 "peer_address": { 00:18:14.183 "trtype": "TCP", 00:18:14.183 "adrfam": "IPv4", 00:18:14.183 "traddr": "10.0.0.1", 00:18:14.183 "trsvcid": "47516" 00:18:14.183 }, 00:18:14.183 "auth": { 00:18:14.183 "state": "completed", 00:18:14.183 "digest": "sha512", 00:18:14.183 "dhgroup": "ffdhe2048" 00:18:14.183 } 00:18:14.183 } 00:18:14.183 ]' 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.183 07:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.442 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.011 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.270 07:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.270 00:18:15.270 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.270 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.270 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.529 { 00:18:15.529 "cntlid": 113, 00:18:15.529 "qid": 0, 00:18:15.529 "state": "enabled", 00:18:15.529 "thread": "nvmf_tgt_poll_group_000", 00:18:15.529 "listen_address": { 00:18:15.529 "trtype": "TCP", 00:18:15.529 "adrfam": "IPv4", 00:18:15.529 "traddr": "10.0.0.2", 00:18:15.529 "trsvcid": "4420" 00:18:15.529 }, 00:18:15.529 "peer_address": { 00:18:15.529 "trtype": "TCP", 00:18:15.529 "adrfam": "IPv4", 00:18:15.529 "traddr": "10.0.0.1", 00:18:15.529 "trsvcid": "47550" 00:18:15.529 }, 00:18:15.529 "auth": { 00:18:15.529 "state": "completed", 00:18:15.529 "digest": "sha512", 00:18:15.529 "dhgroup": "ffdhe3072" 00:18:15.529 } 00:18:15.529 } 00:18:15.529 ]' 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.529 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.789 07:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:16.358 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.617 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.876 00:18:16.876 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.876 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.876 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.135 { 00:18:17.135 "cntlid": 115, 00:18:17.135 "qid": 0, 00:18:17.135 "state": "enabled", 00:18:17.135 "thread": "nvmf_tgt_poll_group_000", 00:18:17.135 "listen_address": { 00:18:17.135 "trtype": "TCP", 00:18:17.135 "adrfam": "IPv4", 00:18:17.135 "traddr": "10.0.0.2", 00:18:17.135 "trsvcid": "4420" 00:18:17.135 }, 00:18:17.135 "peer_address": { 00:18:17.135 "trtype": "TCP", 00:18:17.135 "adrfam": "IPv4", 00:18:17.135 "traddr": "10.0.0.1", 00:18:17.135 "trsvcid": "47564" 00:18:17.135 }, 00:18:17.135 "auth": { 00:18:17.135 "state": "completed", 00:18:17.135 "digest": "sha512", 00:18:17.135 "dhgroup": "ffdhe3072" 00:18:17.135 } 00:18:17.135 } 
00:18:17.135 ]' 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.135 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.394 07:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:17.963 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.223 07:54:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.223 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.482 00:18:18.482 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.482 07:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.482 { 00:18:18.482 "cntlid": 117, 00:18:18.482 "qid": 0, 00:18:18.482 "state": "enabled", 00:18:18.482 "thread": "nvmf_tgt_poll_group_000", 00:18:18.482 "listen_address": { 00:18:18.482 "trtype": "TCP", 00:18:18.482 "adrfam": "IPv4", 00:18:18.482 "traddr": "10.0.0.2", 00:18:18.482 "trsvcid": "4420" 00:18:18.482 }, 00:18:18.482 "peer_address": { 00:18:18.482 "trtype": "TCP", 00:18:18.482 "adrfam": "IPv4", 00:18:18.482 "traddr": "10.0.0.1", 00:18:18.482 "trsvcid": "47584" 00:18:18.482 }, 00:18:18.482 "auth": { 00:18:18.482 "state": "completed", 00:18:18.482 "digest": "sha512", 00:18:18.482 "dhgroup": "ffdhe3072" 00:18:18.482 } 00:18:18.482 } 00:18:18.482 ]' 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.482 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.740 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.740 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.740 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.740 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.740 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.999 07:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.567 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.825 00:18:19.825 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.825 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.825 07:54:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.082 { 00:18:20.082 "cntlid": 119, 00:18:20.082 "qid": 0, 00:18:20.082 "state": "enabled", 00:18:20.082 "thread": "nvmf_tgt_poll_group_000", 00:18:20.082 "listen_address": { 00:18:20.082 "trtype": "TCP", 00:18:20.082 "adrfam": "IPv4", 00:18:20.082 "traddr": "10.0.0.2", 00:18:20.082 "trsvcid": "4420" 00:18:20.082 }, 00:18:20.082 "peer_address": { 00:18:20.082 "trtype": "TCP", 00:18:20.082 "adrfam": "IPv4", 00:18:20.082 "traddr": "10.0.0.1", 00:18:20.082 "trsvcid": "47604" 00:18:20.082 }, 00:18:20.082 "auth": { 00:18:20.082 "state": "completed", 00:18:20.082 "digest": "sha512", 00:18:20.082 "dhgroup": "ffdhe3072" 00:18:20.082 } 00:18:20.082 } 00:18:20.082 ]' 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.082 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.339 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.339 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.340 07:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.340 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.904 07:54:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.904 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.162 07:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.420 00:18:21.420 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.420 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.420 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.678 { 00:18:21.678 "cntlid": 121, 00:18:21.678 "qid": 0, 00:18:21.678 "state": "enabled", 00:18:21.678 "thread": "nvmf_tgt_poll_group_000", 00:18:21.678 "listen_address": { 00:18:21.678 "trtype": "TCP", 00:18:21.678 "adrfam": "IPv4", 
00:18:21.678 "traddr": "10.0.0.2", 00:18:21.678 "trsvcid": "4420" 00:18:21.678 }, 00:18:21.678 "peer_address": { 00:18:21.678 "trtype": "TCP", 00:18:21.678 "adrfam": "IPv4", 00:18:21.678 "traddr": "10.0.0.1", 00:18:21.678 "trsvcid": "47614" 00:18:21.678 }, 00:18:21.678 "auth": { 00:18:21.678 "state": "completed", 00:18:21.678 "digest": "sha512", 00:18:21.678 "dhgroup": "ffdhe4096" 00:18:21.678 } 00:18:21.678 } 00:18:21.678 ]' 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.678 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.940 07:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.572 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.573 07:54:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.573 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.830 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.830 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.830 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.830 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.088 { 00:18:23.088 "cntlid": 123, 00:18:23.088 "qid": 0, 00:18:23.088 "state": "enabled", 00:18:23.088 "thread": "nvmf_tgt_poll_group_000", 00:18:23.088 "listen_address": { 00:18:23.088 "trtype": "TCP", 00:18:23.088 "adrfam": "IPv4", 00:18:23.088 "traddr": "10.0.0.2", 00:18:23.088 "trsvcid": "4420" 00:18:23.088 }, 00:18:23.088 "peer_address": { 00:18:23.088 "trtype": "TCP", 00:18:23.088 "adrfam": "IPv4", 00:18:23.088 "traddr": "10.0.0.1", 00:18:23.088 "trsvcid": "57260" 00:18:23.088 }, 00:18:23.088 "auth": { 00:18:23.088 "state": "completed", 00:18:23.088 "digest": "sha512", 00:18:23.088 "dhgroup": "ffdhe4096" 00:18:23.088 } 00:18:23.088 } 00:18:23.088 ]' 00:18:23.088 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.346 07:54:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.346 07:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.604 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.170 07:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.427 00:18:24.427 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.427 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.427 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.685 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.686 { 00:18:24.686 "cntlid": 125, 00:18:24.686 "qid": 0, 00:18:24.686 "state": "enabled", 00:18:24.686 "thread": "nvmf_tgt_poll_group_000", 00:18:24.686 "listen_address": { 00:18:24.686 "trtype": "TCP", 00:18:24.686 "adrfam": "IPv4", 00:18:24.686 "traddr": "10.0.0.2", 00:18:24.686 "trsvcid": "4420" 00:18:24.686 }, 00:18:24.686 "peer_address": { 00:18:24.686 "trtype": "TCP", 00:18:24.686 "adrfam": "IPv4", 00:18:24.686 "traddr": "10.0.0.1", 00:18:24.686 "trsvcid": "57282" 00:18:24.686 }, 00:18:24.686 "auth": { 00:18:24.686 "state": "completed", 00:18:24.686 "digest": "sha512", 00:18:24.686 "dhgroup": "ffdhe4096" 00:18:24.686 } 00:18:24.686 } 00:18:24.686 ]' 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.686 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.944 07:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:25.510 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
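The records above are one pass of the test's nested loop: for each DH group, and for each of the four key indices, target/auth.sh pins the host driver to a single digest/dhgroup pair and runs connect_authenticate (the digest is fixed at sha512 throughout this stretch of the log). Condensed out of the xtrace output, the flow is the sketch below. hostrpc and rpc_cmd are the helpers visible in the trace (rpc.py against the host app's /var/tmp/host.sock socket, and its target-side counterpart whose socket is not shown in this stretch); the rpc_py variable name and the qpairs/jq step are paraphrased for readability, and the DHHC-1 secrets are elided as placeholders.

# Condensed sketch of the traced flow; connect_authenticate, dhgroups and
# keys come from target/auth.sh / common/autotest_common.sh as traced above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }   # expansion traced at target/auth.sh@31

# Outer iteration (target/auth.sh@92-96): every dhgroup x key combination.
for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done

# One connect_authenticate round, instantiated with the ffdhe4096/key2
# values from the surrounding records:
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
# The qpair dump must report the negotiated parameters and a finished handshake.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"                   # sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                   # ffdhe4096
jq -r '.[0].auth.state'   <<< "$qpairs"                   # completed
hostrpc bdev_nvme_detach_controller nvme0
# Repeat the handshake from the kernel initiator, then tear back down.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
  --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

The changing trsvcid in each peer_address dump (47584, 47604, 47614, 57260, ...) is just the fresh ephemeral source port of each new attach; the assertion that matters each round is the [[ completed == completed ]] check on auth.state.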
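One detail in these records is easy to miss: inside connect_authenticate, $3 is the key index, and the controller-key argument is assembled as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). The ${var:+word} expansion yields word only when var is set and non-empty, so an empty ckeys slot leaves the array empty and the flag silently disappears from every later command. That is exactly the asymmetry between the rounds traced here: keys 0 through 2 carry --dhchap-ctrlr-key / --dhchap-ctrl-secret and authenticate in both directions, while the key3 rounds pass --dhchap-key key3 alone and the matching nvme connect supplies only --dhchap-secret, i.e. unidirectional authentication without a controller challenge. (The two-digit field in each DHHC-1 secret, 00 through 03 above, identifies the hash the secret was transformed with under the DH-HMAC-CHAP secret format: 00 untransformed, 01/02/03 for SHA-256/384/512.) A toy illustration of the expansion, with illustrative values rather than the test's real key material:

ckeys=("c0" "c1" "c2" "")                                            # no controller key for index 3
ckey=(${ckeys[2]:+--dhchap-ctrlr-key "ckey2"}); echo "${#ckey[@]}"   # 2 -> flag present
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"}); echo "${#ckey[@]}"   # 0 -> flag omitted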
00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:25.768 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.769 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.026 00:18:26.026 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.026 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.026 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.284 { 00:18:26.284 "cntlid": 127, 00:18:26.284 "qid": 0, 00:18:26.284 "state": "enabled", 00:18:26.284 "thread": "nvmf_tgt_poll_group_000", 00:18:26.284 "listen_address": { 00:18:26.284 "trtype": "TCP", 00:18:26.284 "adrfam": "IPv4", 00:18:26.284 "traddr": "10.0.0.2", 00:18:26.284 "trsvcid": "4420" 00:18:26.284 }, 00:18:26.284 "peer_address": { 00:18:26.284 "trtype": "TCP", 00:18:26.284 "adrfam": "IPv4", 00:18:26.284 "traddr": "10.0.0.1", 00:18:26.284 "trsvcid": "57312" 00:18:26.284 }, 00:18:26.284 "auth": { 00:18:26.284 "state": "completed", 00:18:26.284 "digest": "sha512", 00:18:26.284 "dhgroup": "ffdhe4096" 00:18:26.284 } 00:18:26.284 } 00:18:26.284 ]' 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.284 07:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.284 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.284 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.542 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.542 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.543 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.543 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.110 07:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.369 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.627 00:18:27.627 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.627 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.627 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.885 { 00:18:27.885 "cntlid": 129, 00:18:27.885 "qid": 0, 00:18:27.885 "state": "enabled", 00:18:27.885 "thread": "nvmf_tgt_poll_group_000", 00:18:27.885 "listen_address": { 00:18:27.885 "trtype": "TCP", 00:18:27.885 "adrfam": "IPv4", 00:18:27.885 "traddr": "10.0.0.2", 00:18:27.885 "trsvcid": "4420" 00:18:27.885 }, 00:18:27.885 "peer_address": { 00:18:27.885 "trtype": "TCP", 00:18:27.885 "adrfam": "IPv4", 00:18:27.885 "traddr": "10.0.0.1", 00:18:27.885 "trsvcid": "57344" 00:18:27.885 }, 00:18:27.885 "auth": { 00:18:27.885 "state": "completed", 00:18:27.885 "digest": "sha512", 00:18:27.885 "dhgroup": "ffdhe6144" 00:18:27.885 } 00:18:27.885 } 00:18:27.885 ]' 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.885 07:54:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.885 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.143 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.143 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.143 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.143 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.143 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.402 07:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.966 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.967 07:54:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.967 07:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.532 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.532 { 00:18:29.532 "cntlid": 131, 00:18:29.532 "qid": 0, 00:18:29.532 "state": "enabled", 00:18:29.532 "thread": "nvmf_tgt_poll_group_000", 00:18:29.532 "listen_address": { 00:18:29.532 "trtype": "TCP", 00:18:29.532 "adrfam": "IPv4", 00:18:29.532 "traddr": "10.0.0.2", 00:18:29.532 "trsvcid": "4420" 00:18:29.532 }, 00:18:29.532 "peer_address": { 00:18:29.532 "trtype": "TCP", 00:18:29.532 "adrfam": "IPv4", 00:18:29.532 "traddr": "10.0.0.1", 00:18:29.532 "trsvcid": "57364" 00:18:29.532 }, 00:18:29.532 "auth": { 00:18:29.532 "state": "completed", 00:18:29.532 "digest": "sha512", 00:18:29.532 "dhgroup": "ffdhe6144" 00:18:29.532 } 00:18:29.532 } 00:18:29.532 ]' 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.532 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.791 07:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:30.358 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.358 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.358 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.358 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.617 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.185 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.185 07:54:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.185 { 00:18:31.185 "cntlid": 133, 00:18:31.185 "qid": 0, 00:18:31.185 "state": "enabled", 00:18:31.185 "thread": "nvmf_tgt_poll_group_000", 00:18:31.185 "listen_address": { 00:18:31.185 "trtype": "TCP", 00:18:31.185 "adrfam": "IPv4", 00:18:31.185 "traddr": "10.0.0.2", 00:18:31.185 "trsvcid": "4420" 00:18:31.185 }, 00:18:31.185 "peer_address": { 00:18:31.185 "trtype": "TCP", 00:18:31.185 "adrfam": "IPv4", 00:18:31.185 "traddr": "10.0.0.1", 00:18:31.185 "trsvcid": "57382" 00:18:31.185 }, 00:18:31.185 "auth": { 00:18:31.185 "state": "completed", 00:18:31.185 "digest": "sha512", 00:18:31.185 "dhgroup": "ffdhe6144" 00:18:31.185 } 00:18:31.185 } 00:18:31.185 ]' 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.185 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.445 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.445 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.445 07:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.445 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:32.012 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.012 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.012 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.012 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.012 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.013 07:54:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.013 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.013 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.272 07:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.530 00:18:32.530 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.530 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.530 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.789 { 00:18:32.789 "cntlid": 135, 00:18:32.789 "qid": 0, 00:18:32.789 "state": "enabled", 00:18:32.789 "thread": "nvmf_tgt_poll_group_000", 00:18:32.789 "listen_address": { 00:18:32.789 "trtype": "TCP", 00:18:32.789 "adrfam": "IPv4", 00:18:32.789 "traddr": "10.0.0.2", 00:18:32.789 "trsvcid": "4420" 00:18:32.789 }, 
00:18:32.789 "peer_address": { 00:18:32.789 "trtype": "TCP", 00:18:32.789 "adrfam": "IPv4", 00:18:32.789 "traddr": "10.0.0.1", 00:18:32.789 "trsvcid": "53958" 00:18:32.789 }, 00:18:32.789 "auth": { 00:18:32.789 "state": "completed", 00:18:32.789 "digest": "sha512", 00:18:32.789 "dhgroup": "ffdhe6144" 00:18:32.789 } 00:18:32.789 } 00:18:32.789 ]' 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.789 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.047 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.047 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.048 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.048 07:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.615 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.873 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.874 07:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.441 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.441 07:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.700 07:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.700 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.700 { 00:18:34.700 "cntlid": 137, 00:18:34.700 "qid": 0, 00:18:34.701 "state": "enabled", 00:18:34.701 "thread": "nvmf_tgt_poll_group_000", 00:18:34.701 "listen_address": { 00:18:34.701 "trtype": "TCP", 00:18:34.701 "adrfam": "IPv4", 00:18:34.701 "traddr": "10.0.0.2", 00:18:34.701 "trsvcid": "4420" 00:18:34.701 }, 00:18:34.701 "peer_address": { 00:18:34.701 "trtype": "TCP", 00:18:34.701 "adrfam": "IPv4", 00:18:34.701 "traddr": "10.0.0.1", 00:18:34.701 "trsvcid": "53996" 00:18:34.701 }, 00:18:34.701 "auth": { 00:18:34.701 "state": "completed", 00:18:34.701 "digest": "sha512", 00:18:34.701 "dhgroup": "ffdhe8192" 00:18:34.701 } 00:18:34.701 } 00:18:34.701 ]' 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.701 07:54:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.701 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.961 07:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.530 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.789 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.047 00:18:36.047 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.047 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.048 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.306 { 00:18:36.306 "cntlid": 139, 00:18:36.306 "qid": 0, 00:18:36.306 "state": "enabled", 00:18:36.306 "thread": "nvmf_tgt_poll_group_000", 00:18:36.306 "listen_address": { 00:18:36.306 "trtype": "TCP", 00:18:36.306 "adrfam": "IPv4", 00:18:36.306 "traddr": "10.0.0.2", 00:18:36.306 "trsvcid": "4420" 00:18:36.306 }, 00:18:36.306 "peer_address": { 00:18:36.306 "trtype": "TCP", 00:18:36.306 "adrfam": "IPv4", 00:18:36.306 "traddr": "10.0.0.1", 00:18:36.306 "trsvcid": "54018" 00:18:36.306 }, 00:18:36.306 "auth": { 00:18:36.306 "state": "completed", 00:18:36.306 "digest": "sha512", 00:18:36.306 "dhgroup": "ffdhe8192" 00:18:36.306 } 00:18:36.306 } 00:18:36.306 ]' 00:18:36.306 07:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.306 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.306 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.565 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ODMyY2E1YjQ3OTkzYTk3NmVmMmRmNTAyYmQ0OWY1MTY0Meom: --dhchap-ctrl-secret DHHC-1:02:OTEzMzJiZWRjZjkxYmU5OTQxODQxMzg3MGYyYmI2MTQ4MGMxYzM2MTQ2MzVkZjAwwdfxQQ==: 00:18:37.130 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.388 07:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.388 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.952 00:18:37.952 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.952 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.952 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.210 { 00:18:38.210 "cntlid": 141, 00:18:38.210 "qid": 0, 00:18:38.210 "state": "enabled", 00:18:38.210 "thread": "nvmf_tgt_poll_group_000", 00:18:38.210 "listen_address": { 00:18:38.210 "trtype": "TCP", 00:18:38.210 "adrfam": "IPv4", 00:18:38.210 "traddr": "10.0.0.2", 00:18:38.210 "trsvcid": "4420" 00:18:38.210 }, 00:18:38.210 "peer_address": { 00:18:38.210 "trtype": "TCP", 00:18:38.210 "adrfam": "IPv4", 00:18:38.210 "traddr": "10.0.0.1", 00:18:38.210 "trsvcid": "54050" 00:18:38.210 }, 00:18:38.210 "auth": { 00:18:38.210 "state": "completed", 00:18:38.210 "digest": "sha512", 00:18:38.210 "dhgroup": "ffdhe8192" 00:18:38.210 } 00:18:38.210 } 00:18:38.210 ]' 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.210 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.211 07:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.468 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YzBiZGExZjI5NGM1MTlhNGI2NDY1NWJiOGNiODI4YjBlNjk3Y2UzNzgyMDNhOTI0ZifQMg==: --dhchap-ctrl-secret DHHC-1:01:NzM5OTQwNjcwOWYzZTQxMDU2ZTg1YTNjYzU3NTM5YWQCMAyS: 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.102 07:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.665 00:18:39.665 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.665 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.665 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.923 { 00:18:39.923 "cntlid": 143, 00:18:39.923 "qid": 0, 00:18:39.923 "state": "enabled", 00:18:39.923 "thread": "nvmf_tgt_poll_group_000", 00:18:39.923 "listen_address": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "adrfam": "IPv4", 00:18:39.923 "traddr": "10.0.0.2", 00:18:39.923 "trsvcid": "4420" 00:18:39.923 }, 00:18:39.923 "peer_address": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "adrfam": "IPv4", 00:18:39.923 "traddr": "10.0.0.1", 00:18:39.923 "trsvcid": "54096" 00:18:39.923 }, 00:18:39.923 "auth": { 00:18:39.923 "state": "completed", 00:18:39.923 "digest": "sha512", 00:18:39.923 "dhgroup": "ffdhe8192" 00:18:39.923 } 00:18:39.923 } 00:18:39.923 ]' 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.923 
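A note on the pass traced here (connect_authenticate sha512 ffdhe8192 3): nvmf_subsystem_add_host registers key3 with no --dhchap-ctrlr-key, so this iteration exercises unidirectional DH-HMAC-CHAP — the host proves possession of the secret to the target but never challenges the controller back. The matching host-side secret carries the DHHC-1:03: prefix; in the DH-HMAC-CHAP secret representation the two-digit field records the hash the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why keys 0 through 3 in this run show prefixes 00 through 03. A minimal sketch of the unidirectional flow, reusing the subsystem and host NQNs from this run (rpc.py paths abbreviated):

# target side: register the host with a one-way key only (no controller key)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key3
# host side: attach without --dhchap-ctrlr-key, skipping mutual authentication
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3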
07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.923 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.181 07:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:40.747 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.005 07:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.571 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.571 { 00:18:41.571 "cntlid": 145, 00:18:41.571 "qid": 0, 00:18:41.571 "state": "enabled", 00:18:41.571 "thread": "nvmf_tgt_poll_group_000", 00:18:41.571 "listen_address": { 00:18:41.571 "trtype": "TCP", 00:18:41.571 "adrfam": "IPv4", 00:18:41.571 "traddr": "10.0.0.2", 00:18:41.571 "trsvcid": "4420" 00:18:41.571 }, 00:18:41.571 "peer_address": { 00:18:41.571 "trtype": "TCP", 00:18:41.571 "adrfam": "IPv4", 00:18:41.571 "traddr": "10.0.0.1", 00:18:41.571 "trsvcid": "54134" 00:18:41.571 }, 00:18:41.571 "auth": { 00:18:41.571 "state": "completed", 00:18:41.571 "digest": "sha512", 00:18:41.571 "dhgroup": "ffdhe8192" 00:18:41.571 } 00:18:41.571 } 00:18:41.571 ]' 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.571 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.829 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.829 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.829 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.829 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.830 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.830 07:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjVkNTRkNTY3Y2VlZmM2Njg4YTJhNzNkOTI5NTA1NDc2NjgwMDU4NDkyNmNjMTg3aQYUig==: --dhchap-ctrl-secret DHHC-1:03:ZjNmNzNjMWMwYjQzMzQ5ZWNmZDIyODA2ZThmYjRiNmM3ZmM5NjhkNGRmMDQ5ODljYTYzMmU2YWMwZTk2MWIyN6QHuhc=: 00:18:42.396 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.396 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.396 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.396 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:42.655 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:42.914 request: 00:18:42.914 { 00:18:42.914 "name": "nvme0", 00:18:42.914 "trtype": "tcp", 00:18:42.914 "traddr": "10.0.0.2", 00:18:42.914 "adrfam": "ipv4", 00:18:42.914 "trsvcid": "4420", 00:18:42.914 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:42.914 "prchk_reftag": false, 00:18:42.914 "prchk_guard": false, 00:18:42.914 "hdgst": false, 00:18:42.914 "ddgst": false, 00:18:42.914 "dhchap_key": "key2", 00:18:42.914 "method": "bdev_nvme_attach_controller", 00:18:42.914 "req_id": 1 00:18:42.914 } 00:18:42.914 Got JSON-RPC error response 00:18:42.914 response: 00:18:42.914 { 00:18:42.914 "code": -5, 00:18:42.914 "message": "Input/output error" 00:18:42.914 } 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:42.914 07:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:43.482 request: 00:18:43.482 { 00:18:43.482 "name": "nvme0", 00:18:43.482 "trtype": "tcp", 00:18:43.482 "traddr": "10.0.0.2", 00:18:43.482 "adrfam": "ipv4", 00:18:43.482 "trsvcid": "4420", 00:18:43.482 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.482 "prchk_reftag": false, 00:18:43.482 "prchk_guard": false, 00:18:43.482 "hdgst": false, 00:18:43.482 "ddgst": false, 00:18:43.482 "dhchap_key": "key1", 00:18:43.482 "dhchap_ctrlr_key": "ckey2", 00:18:43.482 "method": "bdev_nvme_attach_controller", 00:18:43.482 "req_id": 1 00:18:43.482 } 00:18:43.482 Got JSON-RPC error response 00:18:43.482 response: 00:18:43.482 { 00:18:43.482 "code": -5, 00:18:43.482 "message": "Input/output error" 00:18:43.482 } 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.483 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.742 request: 00:18:43.742 { 00:18:43.742 "name": "nvme0", 00:18:43.742 "trtype": "tcp", 00:18:43.742 "traddr": "10.0.0.2", 00:18:43.742 "adrfam": "ipv4", 00:18:43.742 "trsvcid": "4420", 00:18:43.742 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.742 "prchk_reftag": false, 00:18:43.742 "prchk_guard": false, 00:18:43.742 "hdgst": false, 00:18:43.742 "ddgst": false, 00:18:43.742 "dhchap_key": "key1", 00:18:43.742 "dhchap_ctrlr_key": "ckey1", 00:18:43.742 "method": "bdev_nvme_attach_controller", 00:18:43.742 "req_id": 1 00:18:43.742 } 00:18:43.742 Got JSON-RPC error response 00:18:43.742 response: 00:18:43.742 { 00:18:43.742 "code": -5, 00:18:43.742 "message": "Input/output error" 00:18:43.742 } 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3249510 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3249510 ']' 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3249510 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3249510 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3249510' 00:18:44.001 killing process with pid 3249510 00:18:44.001 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3249510 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3249510 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3270843 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3270843 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3270843 ']' 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.002 07:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3270843 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3270843 ']' 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
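At this point the first target process (pid 3249510, killed just above) is replaced: nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, so the app waits for an explicit framework_start_init RPC over /var/tmp/spdk.sock before bringing its subsystems up, and with -L nvmf_auth, enabling the auth-specific debug log component for the negative-path cases that follow. The launch line as traced (-i 0 is the shared-memory instance id and -e 0xFFFF the tracepoint group mask, per this suite's usual invocation):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth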
00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.938 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.196 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.454 07:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.454 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.454 07:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.712 00:18:45.712 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.712 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.712 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.970 { 00:18:45.970 
"cntlid": 1, 00:18:45.970 "qid": 0, 00:18:45.970 "state": "enabled", 00:18:45.970 "thread": "nvmf_tgt_poll_group_000", 00:18:45.970 "listen_address": { 00:18:45.970 "trtype": "TCP", 00:18:45.970 "adrfam": "IPv4", 00:18:45.970 "traddr": "10.0.0.2", 00:18:45.970 "trsvcid": "4420" 00:18:45.970 }, 00:18:45.970 "peer_address": { 00:18:45.970 "trtype": "TCP", 00:18:45.970 "adrfam": "IPv4", 00:18:45.970 "traddr": "10.0.0.1", 00:18:45.970 "trsvcid": "45176" 00:18:45.970 }, 00:18:45.970 "auth": { 00:18:45.970 "state": "completed", 00:18:45.970 "digest": "sha512", 00:18:45.970 "dhgroup": "ffdhe8192" 00:18:45.970 } 00:18:45.970 } 00:18:45.970 ]' 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.970 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.228 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.228 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.228 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.228 07:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzVkYmViZGM2Y2JjZDhiMTNlOTYwOGY0ODYzMzI1Zjg4OTY2YzU5M2EwODhmNmQ4Yzk5Mzk2NDhiMmNlYWJiZemuwD4=: 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:46.795 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.053 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.312 request: 00:18:47.312 { 00:18:47.312 "name": "nvme0", 00:18:47.312 "trtype": "tcp", 00:18:47.312 "traddr": "10.0.0.2", 00:18:47.312 "adrfam": "ipv4", 00:18:47.312 "trsvcid": "4420", 00:18:47.312 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:47.312 "prchk_reftag": false, 00:18:47.312 "prchk_guard": false, 00:18:47.312 "hdgst": false, 00:18:47.312 "ddgst": false, 00:18:47.312 "dhchap_key": "key3", 00:18:47.312 "method": "bdev_nvme_attach_controller", 00:18:47.312 "req_id": 1 00:18:47.312 } 00:18:47.312 Got JSON-RPC error response 00:18:47.312 response: 00:18:47.312 { 00:18:47.312 "code": -5, 00:18:47.312 "message": "Input/output error" 00:18:47.312 } 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:47.312 07:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:47.569 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.569 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.570 request: 00:18:47.570 { 00:18:47.570 "name": "nvme0", 00:18:47.570 "trtype": "tcp", 00:18:47.570 "traddr": "10.0.0.2", 00:18:47.570 "adrfam": "ipv4", 00:18:47.570 "trsvcid": "4420", 00:18:47.570 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:47.570 "prchk_reftag": false, 00:18:47.570 "prchk_guard": false, 00:18:47.570 "hdgst": false, 00:18:47.570 "ddgst": false, 00:18:47.570 "dhchap_key": "key3", 00:18:47.570 "method": "bdev_nvme_attach_controller", 00:18:47.570 "req_id": 1 00:18:47.570 } 00:18:47.570 Got JSON-RPC error response 00:18:47.570 response: 00:18:47.570 { 00:18:47.570 "code": -5, 00:18:47.570 "message": "Input/output error" 00:18:47.570 } 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.570 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:47.829 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:48.088 request: 00:18:48.088 { 00:18:48.088 "name": "nvme0", 00:18:48.088 "trtype": "tcp", 00:18:48.088 "traddr": "10.0.0.2", 00:18:48.088 "adrfam": "ipv4", 00:18:48.088 "trsvcid": "4420", 00:18:48.088 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:48.088 "prchk_reftag": false, 00:18:48.088 "prchk_guard": false, 00:18:48.088 "hdgst": false, 00:18:48.088 "ddgst": false, 00:18:48.088 
"dhchap_key": "key0", 00:18:48.088 "dhchap_ctrlr_key": "key1", 00:18:48.088 "method": "bdev_nvme_attach_controller", 00:18:48.088 "req_id": 1 00:18:48.088 } 00:18:48.088 Got JSON-RPC error response 00:18:48.088 response: 00:18:48.088 { 00:18:48.088 "code": -5, 00:18:48.088 "message": "Input/output error" 00:18:48.088 } 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:48.088 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:48.347 00:18:48.347 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:48.347 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:48.347 07:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.347 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.347 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.347 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3249639 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3249639 ']' 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3249639 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3249639 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3249639' 00:18:48.608 killing process with pid 3249639 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3249639 00:18:48.608 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3249639 
00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.175 rmmod nvme_tcp 00:18:49.175 rmmod nvme_fabrics 00:18:49.175 rmmod nvme_keyring 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3270843 ']' 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3270843 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3270843 ']' 00:18:49.175 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3270843 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270843 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270843' 00:18:49.176 killing process with pid 3270843 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3270843 00:18:49.176 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3270843 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.434 07:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.337 07:54:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:51.337 07:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RrX /tmp/spdk.key-sha256.qhV /tmp/spdk.key-sha384.yku /tmp/spdk.key-sha512.2o6 /tmp/spdk.key-sha512.IpK /tmp/spdk.key-sha384.EEU /tmp/spdk.key-sha256.gMg '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:51.337 00:18:51.337 real 2m13.145s 00:18:51.337 user 5m5.437s 00:18:51.337 sys 0m21.272s 00:18:51.337 07:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.337 07:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.337 ************************************ 00:18:51.337 END TEST nvmf_auth_target 00:18:51.337 ************************************ 00:18:51.337 07:54:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:51.337 07:54:36 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:51.337 07:54:36 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:51.337 07:54:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:51.337 07:54:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.337 07:54:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:51.337 ************************************ 00:18:51.337 START TEST nvmf_bdevio_no_huge 00:18:51.337 ************************************ 00:18:51.337 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:51.596 * Looking for test storage... 00:18:51.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
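
The bdevio suite starting here is launched through the harness's run_test wrapper, but the script itself is self-contained. A minimal way to rerun just this suite by hand — a sketch only, assuming a built SPDK checkout, root privileges, and NICs matching this job's autorun-spdk.conf (e810, NET_TYPE=phy) — would be:

    # Sketch: rerun this one suite outside Jenkins.
    cd /path/to/spdk                        # built checkout; path is an assumption
    export SPDK_TEST_NVMF_NICS=e810 NET_TYPE=phy
    sudo -E ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
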
00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.596 07:54:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.596 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.597 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.597 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:51.597 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:51.597 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:51.597 07:54:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
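
The scan underway here buckets the machine's NICs by PCI vendor:device ID — the e810 parts are 0x8086:0x1592 and 0x8086:0x159b, with the x722 and Mellanox buckets filled just below, and "Found 0000:86:00.0 (0x8086 - 0x159b)" further on is the result. The script walks a cached sysfs PCI map; as an illustrative stand-in (not the script's own mechanism), the same classification can be checked by hand:

    # Illustration only: list e810 devices and their backing netdevs.
    for id in 8086:1592 8086:159b; do
        lspci -D -d "$id"                   # -D prints domain:bus:dev.fn, e.g. 0000:86:00.0
    done
    ls /sys/bus/pci/devices/0000:86:00.0/net/   # netdev name, e.g. cvl_0_0
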
00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:58.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:58.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.165 
07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:58.165 Found net devices under 0000:86:00.0: cvl_0_0 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:58.165 Found net devices under 0000:86:00.1: cvl_0_1 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.165 07:54:41 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:18:58.165 00:18:58.165 --- 10.0.0.2 ping statistics --- 00:18:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.165 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:18:58.165 00:18:58.165 --- 10.0.0.1 ping statistics --- 00:18:58.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.165 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:58.165 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3275120 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
3275120 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3275120 ']' 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.166 07:54:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 [2024-07-15 07:54:41.996919] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:18:58.166 [2024-07-15 07:54:41.996960] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:58.166 [2024-07-15 07:54:42.058131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.166 [2024-07-15 07:54:42.142681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.166 [2024-07-15 07:54:42.142719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.166 [2024-07-15 07:54:42.142726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.166 [2024-07-15 07:54:42.142732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.166 [2024-07-15 07:54:42.142737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
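
The startup notices above name two ways to pull tracepoints out of this target while it runs; both commands come straight from the app's own output. The build/bin path matches this workspace's layout, and the copy destination below is arbitrary:

    # Live snapshot of the tracepoint groups enabled at startup (-e 0xFFFF):
    ./build/bin/spdk_trace -s nvmf -i 0

    # Or keep the shared-memory ring for offline decoding later:
    cp /dev/shm/nvmf_trace.0 /tmp/
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0
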
00:18:58.166 [2024-07-15 07:54:42.142847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:58.166 [2024-07-15 07:54:42.142974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:58.166 [2024-07-15 07:54:42.143067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.166 [2024-07-15 07:54:42.143068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 [2024-07-15 07:54:42.842518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 Malloc0 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.166 [2024-07-15 07:54:42.886810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:58.166 { 00:18:58.166 "params": { 00:18:58.166 "name": "Nvme$subsystem", 00:18:58.166 "trtype": "$TEST_TRANSPORT", 00:18:58.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.166 "adrfam": "ipv4", 00:18:58.166 "trsvcid": "$NVMF_PORT", 00:18:58.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.166 "hdgst": ${hdgst:-false}, 00:18:58.166 "ddgst": ${ddgst:-false} 00:18:58.166 }, 00:18:58.166 "method": "bdev_nvme_attach_controller" 00:18:58.166 } 00:18:58.166 EOF 00:18:58.166 )") 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:58.166 07:54:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:58.166 "params": { 00:18:58.166 "name": "Nvme1", 00:18:58.166 "trtype": "tcp", 00:18:58.166 "traddr": "10.0.0.2", 00:18:58.166 "adrfam": "ipv4", 00:18:58.166 "trsvcid": "4420", 00:18:58.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.166 "hdgst": false, 00:18:58.166 "ddgst": false 00:18:58.166 }, 00:18:58.166 "method": "bdev_nvme_attach_controller" 00:18:58.166 }' 00:18:58.425 [2024-07-15 07:54:42.935596] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:18:58.425 [2024-07-15 07:54:42.935641] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3275368 ] 00:18:58.425 [2024-07-15 07:54:43.007519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.425 [2024-07-15 07:54:43.093949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.425 [2024-07-15 07:54:43.094056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.425 [2024-07-15 07:54:43.094056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.681 I/O targets: 00:18:58.681 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:58.681 00:18:58.681 00:18:58.681 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.681 http://cunit.sourceforge.net/ 00:18:58.681 00:18:58.681 00:18:58.681 Suite: bdevio tests on: Nvme1n1 00:18:58.681 Test: blockdev write read block ...passed 00:18:58.681 Test: blockdev write zeroes read block ...passed 00:18:58.681 Test: blockdev write zeroes read no split ...passed 00:18:58.681 Test: blockdev write zeroes read split ...passed 00:18:58.681 Test: blockdev write zeroes read split partial ...passed 00:18:58.681 Test: blockdev reset ...[2024-07-15 07:54:43.406047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.681 [2024-07-15 07:54:43.406106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1146300 (9): Bad file descriptor 00:18:58.938 [2024-07-15 07:54:43.555130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:58.938 passed 00:18:58.938 Test: blockdev write read 8 blocks ...passed 00:18:58.938 Test: blockdev write read size > 128k ...passed 00:18:58.938 Test: blockdev write read invalid size ...passed 00:18:58.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:58.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:58.938 Test: blockdev write read max offset ...passed 00:18:58.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.195 Test: blockdev writev readv 8 blocks ...passed 00:18:59.195 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.195 Test: blockdev writev readv block ...passed 00:18:59.195 Test: blockdev writev readv size > 128k ...passed 00:18:59.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.195 Test: blockdev comparev and writev ...[2024-07-15 07:54:43.766018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.195 [2024-07-15 07:54:43.766046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.195 [2024-07-15 07:54:43.766060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.195 [2024-07-15 07:54:43.766068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:59.195 [2024-07-15 07:54:43.766325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.766347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.766599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.766619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.766858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.766878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.196 [2024-07-15 07:54:43.766884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:59.196 passed 00:18:59.196 Test: blockdev nvme passthru rw ...passed 00:18:59.196 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:54:43.848527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.196 [2024-07-15 07:54:43.848543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.848654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.196 [2024-07-15 07:54:43.848663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.848768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.196 [2024-07-15 07:54:43.848780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:59.196 [2024-07-15 07:54:43.848884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.196 [2024-07-15 07:54:43.848893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:59.196 passed 00:18:59.196 Test: blockdev nvme admin passthru ...passed 00:18:59.196 Test: blockdev copy ...passed 00:18:59.196 00:18:59.196 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.196 suites 1 1 n/a 0 0 00:18:59.196 tests 23 23 23 0 0 00:18:59.196 asserts 152 152 152 0 n/a 00:18:59.196 00:18:59.196 Elapsed time = 1.353 seconds 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.453 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.453 rmmod nvme_tcp 00:18:59.453 rmmod nvme_fabrics 00:18:59.710 rmmod nvme_keyring 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3275120 ']' 00:18:59.710 07:54:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3275120 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3275120 ']' 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3275120 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3275120 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3275120' 00:18:59.710 killing process with pid 3275120 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3275120 00:18:59.710 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3275120 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.970 07:54:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.925 07:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.925 00:19:01.925 real 0m10.568s 00:19:01.925 user 0m13.333s 00:19:01.925 sys 0m5.197s 00:19:01.925 07:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.925 07:54:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.925 ************************************ 00:19:01.925 END TEST nvmf_bdevio_no_huge 00:19:01.925 ************************************ 00:19:02.184 07:54:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:02.184 07:54:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.184 07:54:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:02.184 07:54:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.184 07:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.184 ************************************ 00:19:02.184 START TEST nvmf_tls 00:19:02.184 ************************************ 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.184 * Looking for test storage... 
00:19:02.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.184 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.185 07:54:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.758 
07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.758 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.758 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.758 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:08.758 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:19:08.758 00:19:08.758 --- 10.0.0.2 ping statistics --- 00:19:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.758 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:19:08.758 00:19:08.758 --- 10.0.0.1 ping statistics --- 00:19:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.758 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:08.758 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3279116 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3279116 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3279116 ']' 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.759 07:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.759 [2024-07-15 07:54:52.644233] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:19:08.759 [2024-07-15 07:54:52.644280] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.759 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.759 [2024-07-15 07:54:52.715403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.759 [2024-07-15 07:54:52.786720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.759 [2024-07-15 07:54:52.786761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:08.759 [2024-07-15 07:54:52.786767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.759 [2024-07-15 07:54:52.786773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.759 [2024-07-15 07:54:52.786777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.759 [2024-07-15 07:54:52.786797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:08.759 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:09.017 true 00:19:09.017 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.017 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:09.276 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:09.276 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:09.276 07:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:09.276 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.276 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:09.534 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:09.534 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:09.534 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:09.792 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.792 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:09.792 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:10.050 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:10.309 07:54:54 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.309 07:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:10.567 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:10.567 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:10.567 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:10.567 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.567 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3RX0f3c7G4 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.vJNOwzqedd 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3RX0f3c7G4 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vJNOwzqedd 00:19:10.826 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:11.085 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:11.343 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3RX0f3c7G4 00:19:11.343 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3RX0f3c7G4 00:19:11.343 07:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.601 [2024-07-15 07:54:56.104214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.601 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.601 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.858 [2024-07-15 07:54:56.449090] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.858 [2024-07-15 07:54:56.449312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.858 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.116 malloc0 00:19:12.116 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.116 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RX0f3c7G4 00:19:12.373 [2024-07-15 07:54:56.958582] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:12.373 07:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3RX0f3c7G4 00:19:12.373 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.348 Initializing NVMe Controllers 00:19:22.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:22.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:22.348 Initialization complete. Launching workers. 
00:19:22.348 ======================================================== 00:19:22.348 Latency(us) 00:19:22.348 Device Information : IOPS MiB/s Average min max 00:19:22.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16568.24 64.72 3863.23 799.00 6147.23 00:19:22.348 ======================================================== 00:19:22.348 Total : 16568.24 64.72 3863.23 799.00 6147.23 00:19:22.348 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RX0f3c7G4 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RX0f3c7G4' 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3281467 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3281467 /var/tmp/bdevperf.sock 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3281467 ']' 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.348 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.607 [2024-07-15 07:55:07.131016] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:19:22.607 [2024-07-15 07:55:07.131067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281467 ] 00:19:22.607 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.607 [2024-07-15 07:55:07.197931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.607 [2024-07-15 07:55:07.276536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.544 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.544 07:55:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.544 07:55:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RX0f3c7G4 00:19:23.544 [2024-07-15 07:55:08.094091] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.544 [2024-07-15 07:55:08.094157] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.544 TLSTESTn1 00:19:23.544 07:55:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:23.544 Running I/O for 10 seconds... 00:19:35.763 00:19:35.763 Latency(us) 00:19:35.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.763 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:35.763 Verification LBA range: start 0x0 length 0x2000 00:19:35.763 TLSTESTn1 : 10.02 5128.45 20.03 0.00 0.00 24917.85 4673.00 49237.48 00:19:35.763 =================================================================================================================== 00:19:35.763 Total : 5128.45 20.03 0.00 0.00 24917.85 4673.00 49237.48 00:19:35.763 0 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3281467 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3281467 ']' 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3281467 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3281467 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3281467' 00:19:35.763 killing process with pid 3281467 00:19:35.763 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3281467 00:19:35.763 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.764 00:19:35.764 Latency(us) 00:19:35.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:35.764 =================================================================================================================== 00:19:35.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.764 [2024-07-15 07:55:18.377829] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3281467 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJNOwzqedd 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJNOwzqedd 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJNOwzqedd 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vJNOwzqedd' 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3283311 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3283311 /var/tmp/bdevperf.sock 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3283311 ']' 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.764 07:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.764 [2024-07-15 07:55:18.607357] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:19:35.764 [2024-07-15 07:55:18.607406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283311 ] 00:19:35.764 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.764 [2024-07-15 07:55:18.665744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.764 [2024-07-15 07:55:18.743819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vJNOwzqedd 00:19:35.764 [2024-07-15 07:55:19.565295] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.764 [2024-07-15 07:55:19.565360] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:35.764 [2024-07-15 07:55:19.574157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:35.764 [2024-07-15 07:55:19.574537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767570 (107): Transport endpoint is not connected 00:19:35.764 [2024-07-15 07:55:19.575530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767570 (9): Bad file descriptor 00:19:35.764 [2024-07-15 07:55:19.576532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.764 [2024-07-15 07:55:19.576543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:35.764 [2024-07-15 07:55:19.576553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
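The two key files used throughout this test, /tmp/tmp.3RX0f3c7G4 and /tmp/tmp.vJNOwzqedd, hold the interchange-format strings generated at target/tls.sh@118-119 above: "NVMeTLSkey-1:<hmac>:<base64 of key bytes plus CRC32>:". Decoding the first base64 payload yields the literal string 00112233445566778899aabbccddeeff followed by four checksum bytes, so the helper traced at nvmf/common.sh@702-705 plausibly reduces to the sketch below. This is a reconstruction consistent with the printed keys, not the verbatim source; the little-endian CRC32 and the two-hex-digit digest formatting are assumptions.

# Reconstruction of format_key (nvmf/common.sh@702-705) inferred from the keys
# printed above. Assumptions: CRC32 appended little-endian, digest rendered as
# two hex digits; both match the observed output but are not verified in source.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import sys, base64, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# should reproduce the first key seen at target/tls.sh@118:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The attach traced above fails exactly because those two keys differ: the target only has the 3RX0f3c7G4 key registered for host1, so a handshake offering the vJNOwzqedd key cannot complete, which is what the JSON-RPC error dump below records.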
00:19:35.764 request: 00:19:35.764 { 00:19:35.764 "name": "TLSTEST", 00:19:35.764 "trtype": "tcp", 00:19:35.764 "traddr": "10.0.0.2", 00:19:35.764 "adrfam": "ipv4", 00:19:35.764 "trsvcid": "4420", 00:19:35.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.764 "prchk_reftag": false, 00:19:35.764 "prchk_guard": false, 00:19:35.764 "hdgst": false, 00:19:35.764 "ddgst": false, 00:19:35.764 "psk": "/tmp/tmp.vJNOwzqedd", 00:19:35.764 "method": "bdev_nvme_attach_controller", 00:19:35.764 "req_id": 1 00:19:35.764 } 00:19:35.764 Got JSON-RPC error response 00:19:35.764 response: 00:19:35.764 { 00:19:35.764 "code": -5, 00:19:35.764 "message": "Input/output error" 00:19:35.764 } 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3283311 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3283311 ']' 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3283311 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283311 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283311' 00:19:35.764 killing process with pid 3283311 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3283311 00:19:35.764 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.764 00:19:35.764 Latency(us) 00:19:35.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.764 =================================================================================================================== 00:19:35.764 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.764 [2024-07-15 07:55:19.648299] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3283311 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RX0f3c7G4 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RX0f3c7G4 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3RX0f3c7G4 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RX0f3c7G4' 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3283543 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3283543 /var/tmp/bdevperf.sock 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3283543 ']' 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.764 07:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.764 [2024-07-15 07:55:19.871899] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:19:35.764 [2024-07-15 07:55:19.871943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283543 ] 00:19:35.764 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.764 [2024-07-15 07:55:19.926474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.764 [2024-07-15 07:55:19.993284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.023 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.023 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.023 07:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3RX0f3c7G4 00:19:36.282 [2024-07-15 07:55:20.847914] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.282 [2024-07-15 07:55:20.847991] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:36.282 [2024-07-15 07:55:20.857456] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:36.282 [2024-07-15 07:55:20.857480] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:36.282 [2024-07-15 07:55:20.857504] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:36.282 [2024-07-15 07:55:20.858283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1998570 (107): Transport endpoint is not connected 00:19:36.282 [2024-07-15 07:55:20.859276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1998570 (9): Bad file descriptor 00:19:36.282 [2024-07-15 07:55:20.860276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.282 [2024-07-15 07:55:20.860287] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:36.282 [2024-07-15 07:55:20.860296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
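This second negative case fails one layer up: the key is correct but the host NQN is not. The lookup error above, "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", shows that the client's TLS 1.3 PSK identity embeds both the host NQN and the subsystem NQN, and that the target resolves keys per (host, subsystem) pair registered with nvmf_subsystem_add_host. Only host1 was registered at target/tls.sh@58, so host2 finds no key and the attach fails, as the dump below records. Purely for illustration (this test wants the failure), registering the second host with the same RPC used earlier would make that identity resolvable:

# hypothetical extra registration; not part of this test flow
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3RX0f3c7G4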
00:19:36.282 request: 00:19:36.282 { 00:19:36.282 "name": "TLSTEST", 00:19:36.282 "trtype": "tcp", 00:19:36.282 "traddr": "10.0.0.2", 00:19:36.282 "adrfam": "ipv4", 00:19:36.282 "trsvcid": "4420", 00:19:36.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.282 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:36.282 "prchk_reftag": false, 00:19:36.282 "prchk_guard": false, 00:19:36.282 "hdgst": false, 00:19:36.282 "ddgst": false, 00:19:36.282 "psk": "/tmp/tmp.3RX0f3c7G4", 00:19:36.282 "method": "bdev_nvme_attach_controller", 00:19:36.282 "req_id": 1 00:19:36.282 } 00:19:36.282 Got JSON-RPC error response 00:19:36.282 response: 00:19:36.282 { 00:19:36.282 "code": -5, 00:19:36.282 "message": "Input/output error" 00:19:36.282 } 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3283543 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3283543 ']' 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3283543 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283543 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283543' 00:19:36.282 killing process with pid 3283543 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3283543 00:19:36.282 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.282 00:19:36.282 Latency(us) 00:19:36.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.282 =================================================================================================================== 00:19:36.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.282 [2024-07-15 07:55:20.930519] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:36.282 07:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3283543 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RX0f3c7G4 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RX0f3c7G4 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3RX0f3c7G4 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3RX0f3c7G4' 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3283787 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3283787 /var/tmp/bdevperf.sock 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3283787 ']' 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.542 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.542 [2024-07-15 07:55:21.154761] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:19:36.542 [2024-07-15 07:55:21.154806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283787 ] 00:19:36.542 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.542 [2024-07-15 07:55:21.211518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.542 [2024-07-15 07:55:21.278250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.478 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.478 07:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.478 07:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3RX0f3c7G4 00:19:37.478 [2024-07-15 07:55:22.124766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.478 [2024-07-15 07:55:22.124838] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.478 [2024-07-15 07:55:22.130688] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:37.478 [2024-07-15 07:55:22.130711] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:37.478 [2024-07-15 07:55:22.130738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:37.478 [2024-07-15 07:55:22.131017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f3570 (107): Transport endpoint is not connected 00:19:37.478 [2024-07-15 07:55:22.132010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f3570 (9): Bad file descriptor 00:19:37.478 [2024-07-15 07:55:22.133012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:37.478 [2024-07-15 07:55:22.133023] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:37.478 [2024-07-15 07:55:22.133032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
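The third case mirrors the second from the subsystem side: the identity NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 cannot resolve because no cnode2 subsystem carries a PSK for host1. All of these expected failures run under the NOT wrapper from autotest_common.sh whose internals are traced above (local es=0, es=1, (( es > 128 )), (( !es == 0 ))): it inverts the wrapped command's exit status so the test passes when the attach fails. Below is a simplified sketch of that pattern; the real helper's signal handling (the es > 128 branch) and its valid_exec_arg checks are richer than shown here.

# simplified NOT: succeed only when the wrapped command fails
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # death by signal stays a real error
    (( es != 0 ))                    # nonzero exit is the expected outcome
}
NOT false && echo "negative case behaved as expected"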
00:19:37.478 request: 00:19:37.478 { 00:19:37.478 "name": "TLSTEST", 00:19:37.478 "trtype": "tcp", 00:19:37.478 "traddr": "10.0.0.2", 00:19:37.478 "adrfam": "ipv4", 00:19:37.478 "trsvcid": "4420", 00:19:37.478 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:37.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.478 "prchk_reftag": false, 00:19:37.478 "prchk_guard": false, 00:19:37.478 "hdgst": false, 00:19:37.478 "ddgst": false, 00:19:37.478 "psk": "/tmp/tmp.3RX0f3c7G4", 00:19:37.478 "method": "bdev_nvme_attach_controller", 00:19:37.478 "req_id": 1 00:19:37.478 } 00:19:37.478 Got JSON-RPC error response 00:19:37.478 response: 00:19:37.478 { 00:19:37.478 "code": -5, 00:19:37.478 "message": "Input/output error" 00:19:37.478 } 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3283787 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3283787 ']' 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3283787 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283787 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283787' 00:19:37.478 killing process with pid 3283787 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3283787 00:19:37.478 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.478 00:19:37.478 Latency(us) 00:19:37.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.478 =================================================================================================================== 00:19:37.478 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.478 [2024-07-15 07:55:22.204159] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:37.478 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3283787 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3284022 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3284022 /var/tmp/bdevperf.sock 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3284022 ']' 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.738 07:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.738 [2024-07-15 07:55:22.424126] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:19:37.738 [2024-07-15 07:55:22.424174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284022 ] 00:19:37.738 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.738 [2024-07-15 07:55:22.485759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.996 [2024-07-15 07:55:22.552814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.564 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.564 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.564 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:38.823 [2024-07-15 07:55:23.393906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:38.823 [2024-07-15 07:55:23.395460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd9af0 (9): Bad file descriptor 00:19:38.823 [2024-07-15 07:55:23.396458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.823 [2024-07-15 07:55:23.396469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:38.823 [2024-07-15 07:55:23.396478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
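The last case drops the PSK entirely, so the initiator attempts a plain NVMe/TCP connect. The listener, however, was created at target/tls.sh@53 with -k, the flag that produced the "TLS support is considered experimental" notice and marks the port as requiring a secure channel; a connection that never starts a TLS handshake is torn down during setup, and the attach again surfaces as the -5 error dumped below. For reference, the target-side setup that enforces this, as traced earlier in this log (paths as in the log):

# traced at target/tls.sh@51-53: TCP transport plus a TLS-required listener;
# the -k flag is what makes the PSK-less attach above fail
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t tcp -o
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k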
00:19:38.823 request: 00:19:38.823 { 00:19:38.823 "name": "TLSTEST", 00:19:38.823 "trtype": "tcp", 00:19:38.823 "traddr": "10.0.0.2", 00:19:38.823 "adrfam": "ipv4", 00:19:38.823 "trsvcid": "4420", 00:19:38.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.823 "prchk_reftag": false, 00:19:38.823 "prchk_guard": false, 00:19:38.823 "hdgst": false, 00:19:38.823 "ddgst": false, 00:19:38.823 "method": "bdev_nvme_attach_controller", 00:19:38.823 "req_id": 1 00:19:38.823 } 00:19:38.823 Got JSON-RPC error response 00:19:38.823 response: 00:19:38.823 { 00:19:38.823 "code": -5, 00:19:38.823 "message": "Input/output error" 00:19:38.823 } 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3284022 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3284022 ']' 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3284022 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3284022 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3284022' 00:19:38.823 killing process with pid 3284022 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3284022 00:19:38.823 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.823 00:19:38.823 Latency(us) 00:19:38.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.823 =================================================================================================================== 00:19:38.823 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.823 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3284022 00:19:39.081 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:39.081 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:39.081 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3279116 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3279116 ']' 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3279116 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3279116 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3279116' 00:19:39.082 
killing process with pid 3279116 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3279116 00:19:39.082 [2024-07-15 07:55:23.686289] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:39.082 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3279116 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.i8dR3033D9 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.i8dR3033D9 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3284273 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3284273 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3284273 ']' 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.341 07:55:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.341 [2024-07-15 07:55:23.987593] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
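The key_long value generated above is the TP 8011 PSK interchange format: the version prefix NVMeTLSkey-1, a two-digit hash identifier (02 selects the 48-byte SHA-384 key used here), and a base64 blob of the raw key bytes followed by a 4-byte CRC32 trailer. A minimal sketch of what the python - heredoc behind format_key computes; the zlib CRC and the little-endian trailer order are assumptions inferred from the key_long output above, not copied from nvmf/common.sh:

format_key_sketch() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 trailer (byte order assumed)
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

# Reproduces the key_long string printed above:
format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2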
00:19:39.341 [2024-07-15 07:55:23.987641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.341 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.341 [2024-07-15 07:55:24.042919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.599 [2024-07-15 07:55:24.119003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.599 [2024-07-15 07:55:24.119042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.599 [2024-07-15 07:55:24.119050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.599 [2024-07-15 07:55:24.119056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.599 [2024-07-15 07:55:24.119061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.599 [2024-07-15 07:55:24.119077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.i8dR3033D9 00:19:40.167 07:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.426 [2024-07-15 07:55:25.009128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.426 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:40.685 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:40.685 [2024-07-15 07:55:25.374049] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.685 [2024-07-15 07:55:25.374231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.685 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:40.944 malloc0 00:19:40.944 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.i8dR3033D9 00:19:41.204 [2024-07-15 07:55:25.907513] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i8dR3033D9 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i8dR3033D9' 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3284531 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3284531 /var/tmp/bdevperf.sock 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3284531 ']' 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.204 07:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.464 [2024-07-15 07:55:25.967076] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
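Stripped of the xtrace noise, the target-side TLS setup that setup_nvmf_tgt performed above comes down to six rpc.py calls; paths are abbreviated here, with $rpc standing for the scripts/rpc.py invocation shown in the log, and the comments are inferences from the save_config dump captured later in this run:

$rpc nvmf_create_transport -t tcp -o        # -o shows up as "c2h_success": false in the config dump
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0  # 32 MiB malloc bdev, 4 KiB blocks (8192 blocks in the dump)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9

Note the ordering: the listener is created with -k before any host is registered, and the PSK is attached per subsystem/host pair, which is why the same key file reappears in every scenario below.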
00:19:41.464 [2024-07-15 07:55:25.967137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284531 ] 00:19:41.464 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.464 [2024-07-15 07:55:26.035266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.464 [2024-07-15 07:55:26.114807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.401 07:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.401 07:55:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.401 07:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9 00:19:42.401 [2024-07-15 07:55:26.948927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.401 [2024-07-15 07:55:26.949007] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.401 TLSTESTn1 00:19:42.401 07:55:27 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:42.401 Running I/O for 10 seconds... 00:19:54.611 00:19:54.611 Latency(us) 00:19:54.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.611 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.611 Verification LBA range: start 0x0 length 0x2000 00:19:54.611 TLSTESTn1 : 10.01 5525.34 21.58 0.00 0.00 23130.11 6069.20 22909.11 00:19:54.611 =================================================================================================================== 00:19:54.611 Total : 5525.34 21.58 0.00 0.00 23130.11 6069.20 22909.11 00:19:54.611 0 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3284531 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3284531 ']' 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3284531 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3284531 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3284531' 00:19:54.611 killing process with pid 3284531 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3284531 00:19:54.611 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.611 00:19:54.611 Latency(us) 00:19:54.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:54.611 =================================================================================================================== 00:19:54.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.611 [2024-07-15 07:55:37.229707] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3284531 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.i8dR3033D9 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i8dR3033D9 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i8dR3033D9 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i8dR3033D9 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i8dR3033D9' 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3286403 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3286403 /var/tmp/bdevperf.sock 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3286403 ']' 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.611 07:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.611 [2024-07-15 07:55:37.463113] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
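The chmod 0666 at target/tls.sh@170 sets up the first negative test: SPDK refuses to load a PSK file that is accessible to group or other, so the bdev_nvme_attach_controller that follows is expected to fail with -1 (Operation not permitted), as the JSON-RPC response below shows. A rough stat-based approximation of the check that bdev_nvme_load_psk performs; the exact mode mask is an assumption, not SPDK's code:

mode=$(stat -c '%a' /tmp/tmp.i8dR3033D9)
if (( 0$mode & 077 )); then                        # any group/other permission bits set?
    echo 'Incorrect permissions for PSK file' >&2  # matches the *ERROR* line that follows
fi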
00:19:54.612 [2024-07-15 07:55:37.463161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286403 ] 00:19:54.612 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.612 [2024-07-15 07:55:37.530007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.612 [2024-07-15 07:55:37.606236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9 00:19:54.612 [2024-07-15 07:55:38.415827] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.612 [2024-07-15 07:55:38.415876] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:54.612 [2024-07-15 07:55:38.415883] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.i8dR3033D9 00:19:54.612 request: 00:19:54.612 { 00:19:54.612 "name": "TLSTEST", 00:19:54.612 "trtype": "tcp", 00:19:54.612 "traddr": "10.0.0.2", 00:19:54.612 "adrfam": "ipv4", 00:19:54.612 "trsvcid": "4420", 00:19:54.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.612 "prchk_reftag": false, 00:19:54.612 "prchk_guard": false, 00:19:54.612 "hdgst": false, 00:19:54.612 "ddgst": false, 00:19:54.612 "psk": "/tmp/tmp.i8dR3033D9", 00:19:54.612 "method": "bdev_nvme_attach_controller", 00:19:54.612 "req_id": 1 00:19:54.612 } 00:19:54.612 Got JSON-RPC error response 00:19:54.612 response: 00:19:54.612 { 00:19:54.612 "code": -1, 00:19:54.612 "message": "Operation not permitted" 00:19:54.612 } 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3286403 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3286403 ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3286403 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3286403 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3286403' 00:19:54.612 killing process with pid 3286403 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3286403 00:19:54.612 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.612 00:19:54.612 Latency(us) 00:19:54.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.612 
=================================================================================================================== 00:19:54.612 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3286403 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3284273 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3284273 ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3284273 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3284273 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3284273' 00:19:54.612 killing process with pid 3284273 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3284273 00:19:54.612 [2024-07-15 07:55:38.706958] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3284273 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3286689 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3286689 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3286689 ']' 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
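The es=1 arithmetic above is the autotest NOT helper scoring a negative test: the wrapped command must fail, but a death by signal (exit status above 128) must not be counted as the expected failure. A simplified sketch of the pattern; the real helper in autotest_common.sh also vets its argument through valid_exec_arg and handles the signal case in more detail:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1  # killed by a signal: treat as a real failure (simplified)
    (( es != 0 ))               # success only if the command exited non-zero
}

The same pattern drives the next scenario: with the key file still 0666, NOT setup_nvmf_tgt expects nvmf_subsystem_add_host on the target side to reject the PSK file.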
00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.612 07:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.612 [2024-07-15 07:55:38.953870] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:19:54.612 [2024-07-15 07:55:38.953918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.612 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.612 [2024-07-15 07:55:39.026099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.612 [2024-07-15 07:55:39.104846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.612 [2024-07-15 07:55:39.104886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.612 [2024-07-15 07:55:39.104893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.612 [2024-07-15 07:55:39.104899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.612 [2024-07-15 07:55:39.104908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.612 [2024-07-15 07:55:39.104925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.i8dR3033D9 00:19:55.181 07:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.439 [2024-07-15 07:55:39.963884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.439 07:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.439 
07:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.698 [2024-07-15 07:55:40.328844] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.698 [2024-07-15 07:55:40.329027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.698 07:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.958 malloc0 00:19:55.958 07:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9 00:19:56.217 [2024-07-15 07:55:40.882566] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:56.217 [2024-07-15 07:55:40.882593] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:56.217 [2024-07-15 07:55:40.882615] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:56.217 request: 00:19:56.217 { 00:19:56.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.217 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.217 "psk": "/tmp/tmp.i8dR3033D9", 00:19:56.217 "method": "nvmf_subsystem_add_host", 00:19:56.217 "req_id": 1 00:19:56.217 } 00:19:56.217 Got JSON-RPC error response 00:19:56.217 response: 00:19:56.217 { 00:19:56.217 "code": -32603, 00:19:56.217 "message": "Internal error" 00:19:56.217 } 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3286689 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3286689 ']' 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3286689 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3286689 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3286689' 00:19:56.217 killing process with pid 3286689 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3286689 00:19:56.217 07:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3286689 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.i8dR3033D9 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:56.475 
07:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3287101 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3287101 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3287101 ']' 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.475 07:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.475 [2024-07-15 07:55:41.191978] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:19:56.475 [2024-07-15 07:55:41.192022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.475 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.733 [2024-07-15 07:55:41.259836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.733 [2024-07-15 07:55:41.337025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.733 [2024-07-15 07:55:41.337062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.733 [2024-07-15 07:55:41.337070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.733 [2024-07-15 07:55:41.337076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.733 [2024-07-15 07:55:41.337081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
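The sequence that follows rebuilds the positive path with the key restored to 0600: transport, subsystem, TLS listener, malloc0 namespace, and host registration, then a bdevperf attach. At target/tls.sh@196-197 both applications are snapshotted with save_config, and those JSON dumps are what later lets tls.sh@203 restart the target with -c /dev/fd/62. The mechanism, sketched with a regular file instead of the process-substitution fd the script uses:

scripts/rpc.py save_config > tgt.json  # serialize the live target state (the tgtconf dump below)
build/bin/nvmf_tgt -m 0x2 -c tgt.json  # replay it: -c loads a JSON-RPC config at startup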
00:19:56.733 [2024-07-15 07:55:41.337110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.i8dR3033D9 00:19:57.337 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.616 [2024-07-15 07:55:42.211526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.616 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.875 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.875 [2024-07-15 07:55:42.576444] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.875 [2024-07-15 07:55:42.576621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.875 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:58.134 malloc0 00:19:58.134 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:58.392 07:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9 00:19:58.392 [2024-07-15 07:55:43.122008] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3287449 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3287449 /var/tmp/bdevperf.sock 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3287449 ']' 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.651 07:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.651 [2024-07-15 07:55:43.197401] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:19:58.651 [2024-07-15 07:55:43.197449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287449 ] 00:19:58.651 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.651 [2024-07-15 07:55:43.252330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.651 [2024-07-15 07:55:43.333276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.587 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.587 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:59.587 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9 00:19:59.587 [2024-07-15 07:55:44.160275] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.587 [2024-07-15 07:55:44.160350] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.587 TLSTESTn1 00:19:59.587 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:59.846 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:59.846 "subsystems": [ 00:19:59.846 { 00:19:59.846 "subsystem": "keyring", 00:19:59.846 "config": [] 00:19:59.846 }, 00:19:59.846 { 00:19:59.846 "subsystem": "iobuf", 00:19:59.846 "config": [ 00:19:59.846 { 00:19:59.846 "method": "iobuf_set_options", 00:19:59.846 "params": { 00:19:59.846 "small_pool_count": 8192, 00:19:59.846 "large_pool_count": 1024, 00:19:59.846 "small_bufsize": 8192, 00:19:59.846 "large_bufsize": 135168 00:19:59.846 } 00:19:59.846 } 00:19:59.846 ] 00:19:59.846 }, 00:19:59.846 { 00:19:59.846 "subsystem": "sock", 00:19:59.846 "config": [ 00:19:59.846 { 00:19:59.846 "method": "sock_set_default_impl", 00:19:59.846 "params": { 00:19:59.846 "impl_name": "posix" 00:19:59.846 } 00:19:59.846 }, 00:19:59.846 { 00:19:59.846 "method": "sock_impl_set_options", 00:19:59.846 "params": { 00:19:59.846 "impl_name": "ssl", 00:19:59.846 "recv_buf_size": 4096, 00:19:59.846 "send_buf_size": 4096, 00:19:59.846 "enable_recv_pipe": true, 00:19:59.846 "enable_quickack": false, 00:19:59.846 "enable_placement_id": 0, 00:19:59.846 "enable_zerocopy_send_server": true, 00:19:59.846 "enable_zerocopy_send_client": false, 00:19:59.846 "zerocopy_threshold": 0, 00:19:59.846 "tls_version": 0, 00:19:59.846 "enable_ktls": false 00:19:59.846 } 00:19:59.846 }, 00:19:59.846 { 00:19:59.846 "method": "sock_impl_set_options", 00:19:59.846 "params": { 00:19:59.846 "impl_name": "posix", 00:19:59.846 "recv_buf_size": 2097152, 00:19:59.846 
"send_buf_size": 2097152, 00:19:59.846 "enable_recv_pipe": true, 00:19:59.847 "enable_quickack": false, 00:19:59.847 "enable_placement_id": 0, 00:19:59.847 "enable_zerocopy_send_server": true, 00:19:59.847 "enable_zerocopy_send_client": false, 00:19:59.847 "zerocopy_threshold": 0, 00:19:59.847 "tls_version": 0, 00:19:59.847 "enable_ktls": false 00:19:59.847 } 00:19:59.847 } 00:19:59.847 ] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "vmd", 00:19:59.847 "config": [] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "accel", 00:19:59.847 "config": [ 00:19:59.847 { 00:19:59.847 "method": "accel_set_options", 00:19:59.847 "params": { 00:19:59.847 "small_cache_size": 128, 00:19:59.847 "large_cache_size": 16, 00:19:59.847 "task_count": 2048, 00:19:59.847 "sequence_count": 2048, 00:19:59.847 "buf_count": 2048 00:19:59.847 } 00:19:59.847 } 00:19:59.847 ] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "bdev", 00:19:59.847 "config": [ 00:19:59.847 { 00:19:59.847 "method": "bdev_set_options", 00:19:59.847 "params": { 00:19:59.847 "bdev_io_pool_size": 65535, 00:19:59.847 "bdev_io_cache_size": 256, 00:19:59.847 "bdev_auto_examine": true, 00:19:59.847 "iobuf_small_cache_size": 128, 00:19:59.847 "iobuf_large_cache_size": 16 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_raid_set_options", 00:19:59.847 "params": { 00:19:59.847 "process_window_size_kb": 1024 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_iscsi_set_options", 00:19:59.847 "params": { 00:19:59.847 "timeout_sec": 30 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_nvme_set_options", 00:19:59.847 "params": { 00:19:59.847 "action_on_timeout": "none", 00:19:59.847 "timeout_us": 0, 00:19:59.847 "timeout_admin_us": 0, 00:19:59.847 "keep_alive_timeout_ms": 10000, 00:19:59.847 "arbitration_burst": 0, 00:19:59.847 "low_priority_weight": 0, 00:19:59.847 "medium_priority_weight": 0, 00:19:59.847 "high_priority_weight": 0, 00:19:59.847 "nvme_adminq_poll_period_us": 10000, 00:19:59.847 "nvme_ioq_poll_period_us": 0, 00:19:59.847 "io_queue_requests": 0, 00:19:59.847 "delay_cmd_submit": true, 00:19:59.847 "transport_retry_count": 4, 00:19:59.847 "bdev_retry_count": 3, 00:19:59.847 "transport_ack_timeout": 0, 00:19:59.847 "ctrlr_loss_timeout_sec": 0, 00:19:59.847 "reconnect_delay_sec": 0, 00:19:59.847 "fast_io_fail_timeout_sec": 0, 00:19:59.847 "disable_auto_failback": false, 00:19:59.847 "generate_uuids": false, 00:19:59.847 "transport_tos": 0, 00:19:59.847 "nvme_error_stat": false, 00:19:59.847 "rdma_srq_size": 0, 00:19:59.847 "io_path_stat": false, 00:19:59.847 "allow_accel_sequence": false, 00:19:59.847 "rdma_max_cq_size": 0, 00:19:59.847 "rdma_cm_event_timeout_ms": 0, 00:19:59.847 "dhchap_digests": [ 00:19:59.847 "sha256", 00:19:59.847 "sha384", 00:19:59.847 "sha512" 00:19:59.847 ], 00:19:59.847 "dhchap_dhgroups": [ 00:19:59.847 "null", 00:19:59.847 "ffdhe2048", 00:19:59.847 "ffdhe3072", 00:19:59.847 "ffdhe4096", 00:19:59.847 "ffdhe6144", 00:19:59.847 "ffdhe8192" 00:19:59.847 ] 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_nvme_set_hotplug", 00:19:59.847 "params": { 00:19:59.847 "period_us": 100000, 00:19:59.847 "enable": false 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_malloc_create", 00:19:59.847 "params": { 00:19:59.847 "name": "malloc0", 00:19:59.847 "num_blocks": 8192, 00:19:59.847 "block_size": 4096, 00:19:59.847 "physical_block_size": 4096, 00:19:59.847 "uuid": 
"12650ae4-eaa2-4b3f-bedb-7acfe91a942b", 00:19:59.847 "optimal_io_boundary": 0 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "bdev_wait_for_examine" 00:19:59.847 } 00:19:59.847 ] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "nbd", 00:19:59.847 "config": [] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "scheduler", 00:19:59.847 "config": [ 00:19:59.847 { 00:19:59.847 "method": "framework_set_scheduler", 00:19:59.847 "params": { 00:19:59.847 "name": "static" 00:19:59.847 } 00:19:59.847 } 00:19:59.847 ] 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "subsystem": "nvmf", 00:19:59.847 "config": [ 00:19:59.847 { 00:19:59.847 "method": "nvmf_set_config", 00:19:59.847 "params": { 00:19:59.847 "discovery_filter": "match_any", 00:19:59.847 "admin_cmd_passthru": { 00:19:59.847 "identify_ctrlr": false 00:19:59.847 } 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "nvmf_set_max_subsystems", 00:19:59.847 "params": { 00:19:59.847 "max_subsystems": 1024 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "nvmf_set_crdt", 00:19:59.847 "params": { 00:19:59.847 "crdt1": 0, 00:19:59.847 "crdt2": 0, 00:19:59.847 "crdt3": 0 00:19:59.847 } 00:19:59.847 }, 00:19:59.847 { 00:19:59.847 "method": "nvmf_create_transport", 00:19:59.847 "params": { 00:19:59.847 "trtype": "TCP", 00:19:59.847 "max_queue_depth": 128, 00:19:59.847 "max_io_qpairs_per_ctrlr": 127, 00:19:59.847 "in_capsule_data_size": 4096, 00:19:59.847 "max_io_size": 131072, 00:19:59.848 "io_unit_size": 131072, 00:19:59.848 "max_aq_depth": 128, 00:19:59.848 "num_shared_buffers": 511, 00:19:59.848 "buf_cache_size": 4294967295, 00:19:59.848 "dif_insert_or_strip": false, 00:19:59.848 "zcopy": false, 00:19:59.848 "c2h_success": false, 00:19:59.848 "sock_priority": 0, 00:19:59.848 "abort_timeout_sec": 1, 00:19:59.848 "ack_timeout": 0, 00:19:59.848 "data_wr_pool_size": 0 00:19:59.848 } 00:19:59.848 }, 00:19:59.848 { 00:19:59.848 "method": "nvmf_create_subsystem", 00:19:59.848 "params": { 00:19:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.848 "allow_any_host": false, 00:19:59.848 "serial_number": "SPDK00000000000001", 00:19:59.848 "model_number": "SPDK bdev Controller", 00:19:59.848 "max_namespaces": 10, 00:19:59.848 "min_cntlid": 1, 00:19:59.848 "max_cntlid": 65519, 00:19:59.848 "ana_reporting": false 00:19:59.848 } 00:19:59.848 }, 00:19:59.848 { 00:19:59.848 "method": "nvmf_subsystem_add_host", 00:19:59.848 "params": { 00:19:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.848 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.848 "psk": "/tmp/tmp.i8dR3033D9" 00:19:59.848 } 00:19:59.848 }, 00:19:59.848 { 00:19:59.848 "method": "nvmf_subsystem_add_ns", 00:19:59.848 "params": { 00:19:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.848 "namespace": { 00:19:59.848 "nsid": 1, 00:19:59.848 "bdev_name": "malloc0", 00:19:59.848 "nguid": "12650AE4EAA24B3FBEDB7ACFE91A942B", 00:19:59.848 "uuid": "12650ae4-eaa2-4b3f-bedb-7acfe91a942b", 00:19:59.848 "no_auto_visible": false 00:19:59.848 } 00:19:59.848 } 00:19:59.848 }, 00:19:59.848 { 00:19:59.848 "method": "nvmf_subsystem_add_listener", 00:19:59.848 "params": { 00:19:59.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.848 "listen_address": { 00:19:59.848 "trtype": "TCP", 00:19:59.848 "adrfam": "IPv4", 00:19:59.848 "traddr": "10.0.0.2", 00:19:59.848 "trsvcid": "4420" 00:19:59.848 }, 00:19:59.848 "secure_channel": true 00:19:59.848 } 00:19:59.848 } 00:19:59.848 ] 00:19:59.848 } 00:19:59.848 ] 00:19:59.848 }' 00:19:59.848 07:55:44 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.108 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:00.108 "subsystems": [ 00:20:00.108 { 00:20:00.108 "subsystem": "keyring", 00:20:00.108 "config": [] 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "subsystem": "iobuf", 00:20:00.108 "config": [ 00:20:00.108 { 00:20:00.108 "method": "iobuf_set_options", 00:20:00.108 "params": { 00:20:00.108 "small_pool_count": 8192, 00:20:00.108 "large_pool_count": 1024, 00:20:00.108 "small_bufsize": 8192, 00:20:00.108 "large_bufsize": 135168 00:20:00.108 } 00:20:00.108 } 00:20:00.108 ] 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "subsystem": "sock", 00:20:00.108 "config": [ 00:20:00.108 { 00:20:00.108 "method": "sock_set_default_impl", 00:20:00.108 "params": { 00:20:00.108 "impl_name": "posix" 00:20:00.108 } 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "method": "sock_impl_set_options", 00:20:00.108 "params": { 00:20:00.108 "impl_name": "ssl", 00:20:00.108 "recv_buf_size": 4096, 00:20:00.108 "send_buf_size": 4096, 00:20:00.108 "enable_recv_pipe": true, 00:20:00.108 "enable_quickack": false, 00:20:00.108 "enable_placement_id": 0, 00:20:00.108 "enable_zerocopy_send_server": true, 00:20:00.108 "enable_zerocopy_send_client": false, 00:20:00.108 "zerocopy_threshold": 0, 00:20:00.108 "tls_version": 0, 00:20:00.108 "enable_ktls": false 00:20:00.108 } 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "method": "sock_impl_set_options", 00:20:00.108 "params": { 00:20:00.108 "impl_name": "posix", 00:20:00.108 "recv_buf_size": 2097152, 00:20:00.108 "send_buf_size": 2097152, 00:20:00.108 "enable_recv_pipe": true, 00:20:00.108 "enable_quickack": false, 00:20:00.108 "enable_placement_id": 0, 00:20:00.108 "enable_zerocopy_send_server": true, 00:20:00.108 "enable_zerocopy_send_client": false, 00:20:00.108 "zerocopy_threshold": 0, 00:20:00.108 "tls_version": 0, 00:20:00.108 "enable_ktls": false 00:20:00.108 } 00:20:00.108 } 00:20:00.108 ] 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "subsystem": "vmd", 00:20:00.108 "config": [] 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "subsystem": "accel", 00:20:00.108 "config": [ 00:20:00.108 { 00:20:00.108 "method": "accel_set_options", 00:20:00.108 "params": { 00:20:00.108 "small_cache_size": 128, 00:20:00.108 "large_cache_size": 16, 00:20:00.108 "task_count": 2048, 00:20:00.108 "sequence_count": 2048, 00:20:00.108 "buf_count": 2048 00:20:00.108 } 00:20:00.108 } 00:20:00.108 ] 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "subsystem": "bdev", 00:20:00.108 "config": [ 00:20:00.108 { 00:20:00.108 "method": "bdev_set_options", 00:20:00.108 "params": { 00:20:00.108 "bdev_io_pool_size": 65535, 00:20:00.108 "bdev_io_cache_size": 256, 00:20:00.108 "bdev_auto_examine": true, 00:20:00.108 "iobuf_small_cache_size": 128, 00:20:00.108 "iobuf_large_cache_size": 16 00:20:00.108 } 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "method": "bdev_raid_set_options", 00:20:00.108 "params": { 00:20:00.108 "process_window_size_kb": 1024 00:20:00.108 } 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "method": "bdev_iscsi_set_options", 00:20:00.108 "params": { 00:20:00.108 "timeout_sec": 30 00:20:00.108 } 00:20:00.108 }, 00:20:00.108 { 00:20:00.108 "method": "bdev_nvme_set_options", 00:20:00.108 "params": { 00:20:00.108 "action_on_timeout": "none", 00:20:00.108 "timeout_us": 0, 00:20:00.108 "timeout_admin_us": 0, 00:20:00.108 "keep_alive_timeout_ms": 10000, 00:20:00.109 "arbitration_burst": 0, 
00:20:00.109 "low_priority_weight": 0, 00:20:00.109 "medium_priority_weight": 0, 00:20:00.109 "high_priority_weight": 0, 00:20:00.109 "nvme_adminq_poll_period_us": 10000, 00:20:00.109 "nvme_ioq_poll_period_us": 0, 00:20:00.109 "io_queue_requests": 512, 00:20:00.109 "delay_cmd_submit": true, 00:20:00.109 "transport_retry_count": 4, 00:20:00.109 "bdev_retry_count": 3, 00:20:00.109 "transport_ack_timeout": 0, 00:20:00.109 "ctrlr_loss_timeout_sec": 0, 00:20:00.109 "reconnect_delay_sec": 0, 00:20:00.109 "fast_io_fail_timeout_sec": 0, 00:20:00.109 "disable_auto_failback": false, 00:20:00.109 "generate_uuids": false, 00:20:00.109 "transport_tos": 0, 00:20:00.109 "nvme_error_stat": false, 00:20:00.109 "rdma_srq_size": 0, 00:20:00.109 "io_path_stat": false, 00:20:00.109 "allow_accel_sequence": false, 00:20:00.109 "rdma_max_cq_size": 0, 00:20:00.109 "rdma_cm_event_timeout_ms": 0, 00:20:00.109 "dhchap_digests": [ 00:20:00.109 "sha256", 00:20:00.109 "sha384", 00:20:00.109 "sha512" 00:20:00.109 ], 00:20:00.109 "dhchap_dhgroups": [ 00:20:00.109 "null", 00:20:00.109 "ffdhe2048", 00:20:00.109 "ffdhe3072", 00:20:00.109 "ffdhe4096", 00:20:00.109 "ffdhe6144", 00:20:00.109 "ffdhe8192" 00:20:00.109 ] 00:20:00.109 } 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "method": "bdev_nvme_attach_controller", 00:20:00.109 "params": { 00:20:00.109 "name": "TLSTEST", 00:20:00.109 "trtype": "TCP", 00:20:00.109 "adrfam": "IPv4", 00:20:00.109 "traddr": "10.0.0.2", 00:20:00.109 "trsvcid": "4420", 00:20:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.109 "prchk_reftag": false, 00:20:00.109 "prchk_guard": false, 00:20:00.109 "ctrlr_loss_timeout_sec": 0, 00:20:00.109 "reconnect_delay_sec": 0, 00:20:00.109 "fast_io_fail_timeout_sec": 0, 00:20:00.109 "psk": "/tmp/tmp.i8dR3033D9", 00:20:00.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.109 "hdgst": false, 00:20:00.109 "ddgst": false 00:20:00.109 } 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "method": "bdev_nvme_set_hotplug", 00:20:00.109 "params": { 00:20:00.109 "period_us": 100000, 00:20:00.109 "enable": false 00:20:00.109 } 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "method": "bdev_wait_for_examine" 00:20:00.109 } 00:20:00.109 ] 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "subsystem": "nbd", 00:20:00.109 "config": [] 00:20:00.109 } 00:20:00.109 ] 00:20:00.109 }' 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3287449 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3287449 ']' 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3287449 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3287449 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3287449' 00:20:00.109 killing process with pid 3287449 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3287449 00:20:00.109 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.109 00:20:00.109 Latency(us) 00:20:00.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:00.109 =================================================================================================================== 00:20:00.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.109 [2024-07-15 07:55:44.801818] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:00.109 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3287449 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3287101 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3287101 ']' 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3287101 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.368 07:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3287101 00:20:00.368 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.368 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.368 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3287101' 00:20:00.368 killing process with pid 3287101 00:20:00.368 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3287101 00:20:00.368 [2024-07-15 07:55:45.025619] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:00.368 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3287101 00:20:00.627 07:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:00.627 07:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.627 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.627 07:55:45 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:00.627 "subsystems": [ 00:20:00.627 { 00:20:00.627 "subsystem": "keyring", 00:20:00.627 "config": [] 00:20:00.627 }, 00:20:00.627 { 00:20:00.627 "subsystem": "iobuf", 00:20:00.627 "config": [ 00:20:00.627 { 00:20:00.627 "method": "iobuf_set_options", 00:20:00.627 "params": { 00:20:00.627 "small_pool_count": 8192, 00:20:00.627 "large_pool_count": 1024, 00:20:00.627 "small_bufsize": 8192, 00:20:00.627 "large_bufsize": 135168 00:20:00.627 } 00:20:00.627 } 00:20:00.627 ] 00:20:00.627 }, 00:20:00.627 { 00:20:00.627 "subsystem": "sock", 00:20:00.627 "config": [ 00:20:00.627 { 00:20:00.627 "method": "sock_set_default_impl", 00:20:00.627 "params": { 00:20:00.627 "impl_name": "posix" 00:20:00.627 } 00:20:00.627 }, 00:20:00.627 { 00:20:00.627 "method": "sock_impl_set_options", 00:20:00.627 "params": { 00:20:00.627 "impl_name": "ssl", 00:20:00.627 "recv_buf_size": 4096, 00:20:00.627 "send_buf_size": 4096, 00:20:00.627 "enable_recv_pipe": true, 00:20:00.627 "enable_quickack": false, 00:20:00.627 "enable_placement_id": 0, 00:20:00.627 "enable_zerocopy_send_server": true, 00:20:00.627 "enable_zerocopy_send_client": false, 00:20:00.627 "zerocopy_threshold": 0, 00:20:00.627 "tls_version": 0, 00:20:00.627 "enable_ktls": false 00:20:00.627 } 00:20:00.627 }, 00:20:00.627 { 00:20:00.627 "method": "sock_impl_set_options", 00:20:00.627 "params": { 00:20:00.627 "impl_name": "posix", 00:20:00.627 
"recv_buf_size": 2097152, 00:20:00.627 "send_buf_size": 2097152, 00:20:00.627 "enable_recv_pipe": true, 00:20:00.627 "enable_quickack": false, 00:20:00.627 "enable_placement_id": 0, 00:20:00.627 "enable_zerocopy_send_server": true, 00:20:00.627 "enable_zerocopy_send_client": false, 00:20:00.627 "zerocopy_threshold": 0, 00:20:00.628 "tls_version": 0, 00:20:00.628 "enable_ktls": false 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "vmd", 00:20:00.628 "config": [] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "accel", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "accel_set_options", 00:20:00.628 "params": { 00:20:00.628 "small_cache_size": 128, 00:20:00.628 "large_cache_size": 16, 00:20:00.628 "task_count": 2048, 00:20:00.628 "sequence_count": 2048, 00:20:00.628 "buf_count": 2048 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "bdev", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "bdev_set_options", 00:20:00.628 "params": { 00:20:00.628 "bdev_io_pool_size": 65535, 00:20:00.628 "bdev_io_cache_size": 256, 00:20:00.628 "bdev_auto_examine": true, 00:20:00.628 "iobuf_small_cache_size": 128, 00:20:00.628 "iobuf_large_cache_size": 16 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_raid_set_options", 00:20:00.628 "params": { 00:20:00.628 "process_window_size_kb": 1024 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_iscsi_set_options", 00:20:00.628 "params": { 00:20:00.628 "timeout_sec": 30 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_nvme_set_options", 00:20:00.628 "params": { 00:20:00.628 "action_on_timeout": "none", 00:20:00.628 "timeout_us": 0, 00:20:00.628 "timeout_admin_us": 0, 00:20:00.628 "keep_alive_timeout_ms": 10000, 00:20:00.628 "arbitration_burst": 0, 00:20:00.628 "low_priority_weight": 0, 00:20:00.628 "medium_priority_weight": 0, 00:20:00.628 "high_priority_weight": 0, 00:20:00.628 "nvme_adminq_poll_period_us": 10000, 00:20:00.628 "nvme_ioq_poll_period_us": 0, 00:20:00.628 "io_queue_requests": 0, 00:20:00.628 "delay_cmd_submit": true, 00:20:00.628 "transport_retry_count": 4, 00:20:00.628 "bdev_retry_count": 3, 00:20:00.628 "transport_ack_timeout": 0, 00:20:00.628 "ctrlr_loss_timeout_sec": 0, 00:20:00.628 "reconnect_delay_sec": 0, 00:20:00.628 "fast_io_fail_timeout_sec": 0, 00:20:00.628 "disable_auto_failback": false, 00:20:00.628 "generate_uuids": false, 00:20:00.628 "transport_tos": 0, 00:20:00.628 "nvme_error_stat": false, 00:20:00.628 "rdma_srq_size": 0, 00:20:00.628 "io_path_stat": false, 00:20:00.628 "allow_accel_sequence": false, 00:20:00.628 "rdma_max_cq_size": 0, 00:20:00.628 "rdma_cm_event_timeout_ms": 0, 00:20:00.628 "dhchap_digests": [ 00:20:00.628 "sha256", 00:20:00.628 "sha384", 00:20:00.628 "sha512" 00:20:00.628 ], 00:20:00.628 "dhchap_dhgroups": [ 00:20:00.628 "null", 00:20:00.628 "ffdhe2048", 00:20:00.628 "ffdhe3072", 00:20:00.628 "ffdhe4096", 00:20:00.628 "ffdhe6144", 00:20:00.628 "ffdhe8192" 00:20:00.628 ] 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_nvme_set_hotplug", 00:20:00.628 "params": { 00:20:00.628 "period_us": 100000, 00:20:00.628 "enable": false 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_malloc_create", 00:20:00.628 "params": { 00:20:00.628 "name": "malloc0", 00:20:00.628 "num_blocks": 8192, 00:20:00.628 "block_size": 4096, 00:20:00.628 "physical_block_size": 4096, 
00:20:00.628 "uuid": "12650ae4-eaa2-4b3f-bedb-7acfe91a942b", 00:20:00.628 "optimal_io_boundary": 0 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "bdev_wait_for_examine" 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "nbd", 00:20:00.628 "config": [] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "scheduler", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "framework_set_scheduler", 00:20:00.628 "params": { 00:20:00.628 "name": "static" 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "subsystem": "nvmf", 00:20:00.628 "config": [ 00:20:00.628 { 00:20:00.628 "method": "nvmf_set_config", 00:20:00.628 "params": { 00:20:00.628 "discovery_filter": "match_any", 00:20:00.628 "admin_cmd_passthru": { 00:20:00.628 "identify_ctrlr": false 00:20:00.628 } 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_set_max_subsystems", 00:20:00.628 "params": { 00:20:00.628 "max_subsystems": 1024 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_set_crdt", 00:20:00.628 "params": { 00:20:00.628 "crdt1": 0, 00:20:00.628 "crdt2": 0, 00:20:00.628 "crdt3": 0 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_create_transport", 00:20:00.628 "params": { 00:20:00.628 "trtype": "TCP", 00:20:00.628 "max_queue_depth": 128, 00:20:00.628 "max_io_qpairs_per_ctrlr": 127, 00:20:00.628 "in_capsule_data_size": 4096, 00:20:00.628 "max_io_size": 131072, 00:20:00.628 "io_unit_size": 131072, 00:20:00.628 "max_aq_depth": 128, 00:20:00.628 "num_shared_buffers": 511, 00:20:00.628 "buf_cache_size": 4294967295, 00:20:00.628 "dif_insert_or_strip": false, 00:20:00.628 "zcopy": false, 00:20:00.628 "c2h_success": false, 00:20:00.628 "sock_priority": 0, 00:20:00.628 "abort_timeout_sec": 1, 00:20:00.628 "ack_timeout": 0, 00:20:00.628 "data_wr_pool_size": 0 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_create_subsystem", 00:20:00.628 "params": { 00:20:00.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.628 "allow_any_host": false, 00:20:00.628 "serial_number": "SPDK00000000000001", 00:20:00.628 "model_number": "SPDK bdev Controller", 00:20:00.628 "max_namespaces": 10, 00:20:00.628 "min_cntlid": 1, 00:20:00.628 "max_cntlid": 65519, 00:20:00.628 "ana_reporting": false 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_subsystem_add_host", 00:20:00.628 "params": { 00:20:00.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.628 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.628 "psk": "/tmp/tmp.i8dR3033D9" 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_subsystem_add_ns", 00:20:00.628 "params": { 00:20:00.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.628 "namespace": { 00:20:00.628 "nsid": 1, 00:20:00.628 "bdev_name": "malloc0", 00:20:00.628 "nguid": "12650AE4EAA24B3FBEDB7ACFE91A942B", 00:20:00.628 "uuid": "12650ae4-eaa2-4b3f-bedb-7acfe91a942b", 00:20:00.628 "no_auto_visible": false 00:20:00.628 } 00:20:00.628 } 00:20:00.628 }, 00:20:00.628 { 00:20:00.628 "method": "nvmf_subsystem_add_listener", 00:20:00.628 "params": { 00:20:00.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.628 "listen_address": { 00:20:00.628 "trtype": "TCP", 00:20:00.628 "adrfam": "IPv4", 00:20:00.628 "traddr": "10.0.0.2", 00:20:00.628 "trsvcid": "4420" 00:20:00.628 }, 00:20:00.628 "secure_channel": true 00:20:00.628 } 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 } 00:20:00.628 ] 00:20:00.628 }' 
00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3287833 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3287833 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3287833 ']' 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.628 07:55:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.628 [2024-07-15 07:55:45.267520] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:00.629 [2024-07-15 07:55:45.267563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.629 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.629 [2024-07-15 07:55:45.327978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.887 [2024-07-15 07:55:45.406112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.887 [2024-07-15 07:55:45.406147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.887 [2024-07-15 07:55:45.406155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.887 [2024-07-15 07:55:45.406161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.887 [2024-07-15 07:55:45.406166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
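Note that the target is launched under ip netns exec cvl_0_0_ns_spdk, i.e. inside a dedicated network namespace that owns the test address 10.0.0.2. A hedged sketch of that launch pattern; the namespace preparation and the interface name are assumptions, since this log only shows the exec step:

    # Assumed preparation (not shown in this log; NIC name is hypothetical):
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # The step the log does show: run the target inside the namespace.
    sudo ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config")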
00:20:00.887 [2024-07-15 07:55:45.406235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.887 [2024-07-15 07:55:45.609274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.887 [2024-07-15 07:55:45.625257] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:01.147 [2024-07-15 07:55:45.641305] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.147 [2024-07-15 07:55:45.649567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3288023 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3288023 /var/tmp/bdevperf.sock 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3288023 ']' 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
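The initiator side here is the bdevperf example application, started with -z so it sits idle until an RPC arrives on its private socket (-r), and configured through /dev/fd/63 the same way the target was configured through /dev/fd/62. A sketch of the pattern using the exact flags from this run (paths abbreviated, config placeholder assumed):

    # Start bdevperf idle (-z), with its own RPC socket and the verify
    # workload parameters used in this run (128 deep, 4096-byte I/O, 10 s):
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_config") &
    # Once the socket is up, the actual I/O run is kicked off over RPC:
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests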
00:20:01.407 07:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:01.407 "subsystems": [ 00:20:01.407 { 00:20:01.407 "subsystem": "keyring", 00:20:01.407 "config": [] 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "subsystem": "iobuf", 00:20:01.407 "config": [ 00:20:01.407 { 00:20:01.407 "method": "iobuf_set_options", 00:20:01.407 "params": { 00:20:01.407 "small_pool_count": 8192, 00:20:01.407 "large_pool_count": 1024, 00:20:01.407 "small_bufsize": 8192, 00:20:01.407 "large_bufsize": 135168 00:20:01.407 } 00:20:01.407 } 00:20:01.407 ] 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "subsystem": "sock", 00:20:01.407 "config": [ 00:20:01.407 { 00:20:01.407 "method": "sock_set_default_impl", 00:20:01.407 "params": { 00:20:01.407 "impl_name": "posix" 00:20:01.407 } 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "method": "sock_impl_set_options", 00:20:01.407 "params": { 00:20:01.407 "impl_name": "ssl", 00:20:01.407 "recv_buf_size": 4096, 00:20:01.407 "send_buf_size": 4096, 00:20:01.407 "enable_recv_pipe": true, 00:20:01.407 "enable_quickack": false, 00:20:01.407 "enable_placement_id": 0, 00:20:01.407 "enable_zerocopy_send_server": true, 00:20:01.407 "enable_zerocopy_send_client": false, 00:20:01.407 "zerocopy_threshold": 0, 00:20:01.407 "tls_version": 0, 00:20:01.407 "enable_ktls": false 00:20:01.407 } 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "method": "sock_impl_set_options", 00:20:01.407 "params": { 00:20:01.407 "impl_name": "posix", 00:20:01.407 "recv_buf_size": 2097152, 00:20:01.407 "send_buf_size": 2097152, 00:20:01.407 "enable_recv_pipe": true, 00:20:01.407 "enable_quickack": false, 00:20:01.407 "enable_placement_id": 0, 00:20:01.407 "enable_zerocopy_send_server": true, 00:20:01.407 "enable_zerocopy_send_client": false, 00:20:01.407 "zerocopy_threshold": 0, 00:20:01.407 "tls_version": 0, 00:20:01.407 "enable_ktls": false 00:20:01.407 } 00:20:01.407 } 00:20:01.407 ] 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "subsystem": "vmd", 00:20:01.407 "config": [] 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "subsystem": "accel", 00:20:01.407 "config": [ 00:20:01.407 { 00:20:01.407 "method": "accel_set_options", 00:20:01.407 "params": { 00:20:01.407 "small_cache_size": 128, 00:20:01.407 "large_cache_size": 16, 00:20:01.407 "task_count": 2048, 00:20:01.407 "sequence_count": 2048, 00:20:01.407 "buf_count": 2048 00:20:01.407 } 00:20:01.407 } 00:20:01.407 ] 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "subsystem": "bdev", 00:20:01.407 "config": [ 00:20:01.407 { 00:20:01.407 "method": "bdev_set_options", 00:20:01.407 "params": { 00:20:01.407 "bdev_io_pool_size": 65535, 00:20:01.407 "bdev_io_cache_size": 256, 00:20:01.407 "bdev_auto_examine": true, 00:20:01.407 "iobuf_small_cache_size": 128, 00:20:01.407 "iobuf_large_cache_size": 16 00:20:01.407 } 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "method": "bdev_raid_set_options", 00:20:01.407 "params": { 00:20:01.407 "process_window_size_kb": 1024 00:20:01.407 } 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "method": "bdev_iscsi_set_options", 00:20:01.407 "params": { 00:20:01.407 "timeout_sec": 30 00:20:01.407 } 00:20:01.407 }, 00:20:01.407 { 00:20:01.407 "method": "bdev_nvme_set_options", 00:20:01.407 "params": { 00:20:01.407 "action_on_timeout": "none", 00:20:01.407 "timeout_us": 0, 00:20:01.407 "timeout_admin_us": 0, 00:20:01.407 "keep_alive_timeout_ms": 10000, 00:20:01.407 "arbitration_burst": 0, 00:20:01.407 "low_priority_weight": 0, 00:20:01.407 "medium_priority_weight": 0, 00:20:01.407 "high_priority_weight": 0, 00:20:01.407 
"nvme_adminq_poll_period_us": 10000, 00:20:01.407 "nvme_ioq_poll_period_us": 0, 00:20:01.407 "io_queue_requests": 512, 00:20:01.407 "delay_cmd_submit": true, 00:20:01.407 "transport_retry_count": 4, 00:20:01.407 "bdev_retry_count": 3, 00:20:01.407 "transport_ack_timeout": 0, 00:20:01.407 "ctrlr_loss_timeout_sec": 0, 00:20:01.407 "reconnect_delay_sec": 0, 00:20:01.407 "fast_io_fail_timeout_sec": 0, 00:20:01.407 "disable_auto_failback": false, 00:20:01.407 "generate_uuids": false, 00:20:01.407 "transport_tos": 0, 00:20:01.407 "nvme_error_stat": false, 00:20:01.407 "rdma_srq_size": 0, 00:20:01.407 "io_path_stat": false, 00:20:01.407 "allow_accel_sequence": false, 00:20:01.407 "rdma_max_cq_size": 0, 00:20:01.407 "rdma_cm_event_timeout_ms": 0, 00:20:01.407 "dhchap_digests": [ 00:20:01.407 "sha256", 00:20:01.407 "sha384", 00:20:01.407 "sha512" 00:20:01.407 ], 00:20:01.407 "dhchap_dhgroups": [ 00:20:01.407 "null", 00:20:01.407 "ffdhe2048", 00:20:01.407 "ffdhe3072", 00:20:01.407 "ffdhe4096", 00:20:01.407 "ffdhe6144", 00:20:01.408 "ffdhe8192" 00:20:01.408 ] 00:20:01.408 } 00:20:01.408 }, 00:20:01.408 { 00:20:01.408 "method": "bdev_nvme_attach_controller", 00:20:01.408 "params": { 00:20:01.408 "name": "TLSTEST", 00:20:01.408 "trtype": "TCP", 00:20:01.408 "adrfam": "IPv4", 00:20:01.408 "traddr": "10.0.0.2", 00:20:01.408 "trsvcid": "4420", 00:20:01.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.408 "prchk_reftag": false, 00:20:01.408 "prchk_guard": false, 00:20:01.408 "ctrlr_loss_timeout_sec": 0, 00:20:01.408 "reconnect_delay_sec": 0, 00:20:01.408 "fast_io_fail_timeout_sec": 0, 00:20:01.408 "psk": "/tmp/tmp.i8dR3033D9", 00:20:01.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.408 "hdgst": false, 00:20:01.408 "ddgst": false 00:20:01.408 } 00:20:01.408 }, 00:20:01.408 { 00:20:01.408 "method": "bdev_nvme_set_hotplug", 00:20:01.408 "params": { 00:20:01.408 "period_us": 100000, 00:20:01.408 "enable": false 00:20:01.408 } 00:20:01.408 }, 00:20:01.408 { 00:20:01.408 "method": "bdev_wait_for_examine" 00:20:01.408 } 00:20:01.408 ] 00:20:01.408 }, 00:20:01.408 { 00:20:01.408 "subsystem": "nbd", 00:20:01.408 "config": [] 00:20:01.408 } 00:20:01.408 ] 00:20:01.408 }' 00:20:01.408 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.408 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.408 [2024-07-15 07:55:46.151601] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:20:01.408 [2024-07-15 07:55:46.151649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288023 ] 00:20:01.667 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.667 [2024-07-15 07:55:46.219854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.667 [2024-07-15 07:55:46.298627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.927 [2024-07-15 07:55:46.441138] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.927 [2024-07-15 07:55:46.441235] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:02.495 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.495 07:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:02.495 07:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.495 Running I/O for 10 seconds... 00:20:12.472 00:20:12.472 Latency(us) 00:20:12.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.472 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.472 Verification LBA range: start 0x0 length 0x2000 00:20:12.472 TLSTESTn1 : 10.01 5384.24 21.03 0.00 0.00 23737.13 5442.34 23365.01 00:20:12.472 =================================================================================================================== 00:20:12.472 Total : 5384.24 21.03 0.00 0.00 23737.13 5442.34 23365.01 00:20:12.472 0 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3288023 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3288023 ']' 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3288023 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3288023 00:20:12.472 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:12.473 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:12.473 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3288023' 00:20:12.473 killing process with pid 3288023 00:20:12.473 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3288023 00:20:12.473 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.473 00:20:12.473 Latency(us) 00:20:12.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.473 =================================================================================================================== 00:20:12.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.473 [2024-07-15 07:55:57.151100] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:12.473 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3288023 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3287833 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3287833 ']' 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3287833 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3287833 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3287833' 00:20:12.731 killing process with pid 3287833 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3287833 00:20:12.731 [2024-07-15 07:55:57.376319] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:12.731 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3287833 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3289905 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3289905 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3289905 ']' 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.990 07:55:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.990 [2024-07-15 07:55:57.620054] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:20:12.990 [2024-07-15 07:55:57.620098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.990 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.990 [2024-07-15 07:55:57.687829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.248 [2024-07-15 07:55:57.756696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.248 [2024-07-15 07:55:57.756735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.248 [2024-07-15 07:55:57.756742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.248 [2024-07-15 07:55:57.756748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.248 [2024-07-15 07:55:57.756752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.248 [2024-07-15 07:55:57.756786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.i8dR3033D9 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.i8dR3033D9 00:20:13.816 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.075 [2024-07-15 07:55:58.624207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.075 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.075 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.334 [2024-07-15 07:55:58.961075] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.334 [2024-07-15 07:55:58.961269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.334 07:55:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.593 malloc0 00:20:14.593 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:14.593 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.i8dR3033D9 00:20:14.852 [2024-07-15 07:55:59.470704] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3290178 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3290178 /var/tmp/bdevperf.sock 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3290178 ']' 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.852 07:55:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 [2024-07-15 07:55:59.512766] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:14.852 [2024-07-15 07:55:59.512813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290178 ] 00:20:14.852 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.852 [2024-07-15 07:55:59.578310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.111 [2024-07-15 07:55:59.650619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.678 07:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.678 07:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:15.678 07:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.i8dR3033D9 00:20:15.937 07:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:15.937 [2024-07-15 07:56:00.654162] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.196 nvme0n1 00:20:16.196 07:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.196 Running I/O for 1 seconds... 
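For reference, the complete TLS setup that produced this run, condensed from the rpc.py calls logged above: the target exposes a listener with -k (TLS) and registers the host's PSK file, while the bdevperf initiator loads the same PSK into its keyring as key0 before attaching:

    # Target side (default RPC socket): TLS listener plus PSK for host1.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i8dR3033D9
    # Initiator side (bdevperf's RPC socket): keyring-based PSK attach.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.i8dR3033D9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1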
00:20:17.133 00:20:17.133 Latency(us) 00:20:17.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.133 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:17.133 Verification LBA range: start 0x0 length 0x2000 00:20:17.133 nvme0n1 : 1.02 5338.21 20.85 0.00 0.00 23763.32 4986.43 31457.28 00:20:17.133 =================================================================================================================== 00:20:17.133 Total : 5338.21 20.85 0.00 0.00 23763.32 4986.43 31457.28 00:20:17.133 0 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3290178 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3290178 ']' 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3290178 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.133 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3290178 00:20:17.391 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:17.391 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:17.391 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3290178' 00:20:17.391 killing process with pid 3290178 00:20:17.391 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3290178 00:20:17.391 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.391 00:20:17.391 Latency(us) 00:20:17.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.391 =================================================================================================================== 00:20:17.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.391 07:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3290178 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3289905 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3289905 ']' 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3289905 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3289905 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3289905' 00:20:17.391 killing process with pid 3289905 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3289905 00:20:17.391 [2024-07-15 07:56:02.137501] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.391 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3289905 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.650 
07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3290656 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3290656 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3290656 ']' 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.650 07:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 [2024-07-15 07:56:02.381651] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:17.650 [2024-07-15 07:56:02.381697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.909 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.909 [2024-07-15 07:56:02.451331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.909 [2024-07-15 07:56:02.529387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.909 [2024-07-15 07:56:02.529420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.909 [2024-07-15 07:56:02.529427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.909 [2024-07-15 07:56:02.529434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.909 [2024-07-15 07:56:02.529439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:17.909 [2024-07-15 07:56:02.529456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.509 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.509 [2024-07-15 07:56:03.224257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.509 malloc0 00:20:18.509 [2024-07-15 07:56:03.252608] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.509 [2024-07-15 07:56:03.252786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3290903 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3290903 /var/tmp/bdevperf.sock 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3290903 ']' 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.768 07:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.768 [2024-07-15 07:56:03.327280] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:20:18.768 [2024-07-15 07:56:03.327318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290903 ] 00:20:18.768 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.768 [2024-07-15 07:56:03.395388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.768 [2024-07-15 07:56:03.468994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.703 07:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.703 07:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.703 07:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.i8dR3033D9 00:20:19.703 07:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:19.962 [2024-07-15 07:56:04.461433] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.962 nvme0n1 00:20:19.962 07:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.962 Running I/O for 1 seconds... 00:20:21.340 00:20:21.340 Latency(us) 00:20:21.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.340 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:21.340 Verification LBA range: start 0x0 length 0x2000 00:20:21.340 nvme0n1 : 1.01 5086.34 19.87 0.00 0.00 24984.56 5841.25 42170.99 00:20:21.340 =================================================================================================================== 00:20:21.340 Total : 5086.34 19.87 0.00 0.00 24984.56 5841.25 42170.99 00:20:21.340 0 00:20:21.340 07:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:21.340 07:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.340 07:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.340 07:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.340 07:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:21.340 "subsystems": [ 00:20:21.340 { 00:20:21.340 "subsystem": "keyring", 00:20:21.340 "config": [ 00:20:21.340 { 00:20:21.340 "method": "keyring_file_add_key", 00:20:21.340 "params": { 00:20:21.340 "name": "key0", 00:20:21.340 "path": "/tmp/tmp.i8dR3033D9" 00:20:21.340 } 00:20:21.340 } 00:20:21.340 ] 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "subsystem": "iobuf", 00:20:21.340 "config": [ 00:20:21.340 { 00:20:21.340 "method": "iobuf_set_options", 00:20:21.340 "params": { 00:20:21.340 "small_pool_count": 8192, 00:20:21.340 "large_pool_count": 1024, 00:20:21.340 "small_bufsize": 8192, 00:20:21.340 "large_bufsize": 135168 00:20:21.340 } 00:20:21.340 } 00:20:21.340 ] 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "subsystem": "sock", 00:20:21.340 "config": [ 00:20:21.340 { 00:20:21.340 "method": "sock_set_default_impl", 00:20:21.340 "params": { 00:20:21.340 "impl_name": "posix" 00:20:21.340 } 
00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "sock_impl_set_options", 00:20:21.340 "params": { 00:20:21.340 "impl_name": "ssl", 00:20:21.340 "recv_buf_size": 4096, 00:20:21.340 "send_buf_size": 4096, 00:20:21.340 "enable_recv_pipe": true, 00:20:21.340 "enable_quickack": false, 00:20:21.340 "enable_placement_id": 0, 00:20:21.340 "enable_zerocopy_send_server": true, 00:20:21.340 "enable_zerocopy_send_client": false, 00:20:21.340 "zerocopy_threshold": 0, 00:20:21.340 "tls_version": 0, 00:20:21.340 "enable_ktls": false 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "sock_impl_set_options", 00:20:21.340 "params": { 00:20:21.340 "impl_name": "posix", 00:20:21.340 "recv_buf_size": 2097152, 00:20:21.340 "send_buf_size": 2097152, 00:20:21.340 "enable_recv_pipe": true, 00:20:21.340 "enable_quickack": false, 00:20:21.340 "enable_placement_id": 0, 00:20:21.340 "enable_zerocopy_send_server": true, 00:20:21.340 "enable_zerocopy_send_client": false, 00:20:21.340 "zerocopy_threshold": 0, 00:20:21.340 "tls_version": 0, 00:20:21.340 "enable_ktls": false 00:20:21.340 } 00:20:21.340 } 00:20:21.340 ] 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "subsystem": "vmd", 00:20:21.340 "config": [] 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "subsystem": "accel", 00:20:21.340 "config": [ 00:20:21.340 { 00:20:21.340 "method": "accel_set_options", 00:20:21.340 "params": { 00:20:21.340 "small_cache_size": 128, 00:20:21.340 "large_cache_size": 16, 00:20:21.340 "task_count": 2048, 00:20:21.340 "sequence_count": 2048, 00:20:21.340 "buf_count": 2048 00:20:21.340 } 00:20:21.340 } 00:20:21.340 ] 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "subsystem": "bdev", 00:20:21.340 "config": [ 00:20:21.340 { 00:20:21.340 "method": "bdev_set_options", 00:20:21.340 "params": { 00:20:21.340 "bdev_io_pool_size": 65535, 00:20:21.340 "bdev_io_cache_size": 256, 00:20:21.340 "bdev_auto_examine": true, 00:20:21.340 "iobuf_small_cache_size": 128, 00:20:21.340 "iobuf_large_cache_size": 16 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "bdev_raid_set_options", 00:20:21.340 "params": { 00:20:21.340 "process_window_size_kb": 1024 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "bdev_iscsi_set_options", 00:20:21.340 "params": { 00:20:21.340 "timeout_sec": 30 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "bdev_nvme_set_options", 00:20:21.340 "params": { 00:20:21.340 "action_on_timeout": "none", 00:20:21.340 "timeout_us": 0, 00:20:21.340 "timeout_admin_us": 0, 00:20:21.340 "keep_alive_timeout_ms": 10000, 00:20:21.340 "arbitration_burst": 0, 00:20:21.340 "low_priority_weight": 0, 00:20:21.340 "medium_priority_weight": 0, 00:20:21.340 "high_priority_weight": 0, 00:20:21.340 "nvme_adminq_poll_period_us": 10000, 00:20:21.340 "nvme_ioq_poll_period_us": 0, 00:20:21.340 "io_queue_requests": 0, 00:20:21.340 "delay_cmd_submit": true, 00:20:21.340 "transport_retry_count": 4, 00:20:21.340 "bdev_retry_count": 3, 00:20:21.340 "transport_ack_timeout": 0, 00:20:21.340 "ctrlr_loss_timeout_sec": 0, 00:20:21.340 "reconnect_delay_sec": 0, 00:20:21.340 "fast_io_fail_timeout_sec": 0, 00:20:21.340 "disable_auto_failback": false, 00:20:21.340 "generate_uuids": false, 00:20:21.340 "transport_tos": 0, 00:20:21.340 "nvme_error_stat": false, 00:20:21.340 "rdma_srq_size": 0, 00:20:21.340 "io_path_stat": false, 00:20:21.340 "allow_accel_sequence": false, 00:20:21.340 "rdma_max_cq_size": 0, 00:20:21.340 "rdma_cm_event_timeout_ms": 0, 00:20:21.340 "dhchap_digests": [ 00:20:21.340 "sha256", 
00:20:21.340 "sha384", 00:20:21.340 "sha512" 00:20:21.340 ], 00:20:21.340 "dhchap_dhgroups": [ 00:20:21.340 "null", 00:20:21.340 "ffdhe2048", 00:20:21.340 "ffdhe3072", 00:20:21.340 "ffdhe4096", 00:20:21.340 "ffdhe6144", 00:20:21.340 "ffdhe8192" 00:20:21.340 ] 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "bdev_nvme_set_hotplug", 00:20:21.340 "params": { 00:20:21.340 "period_us": 100000, 00:20:21.340 "enable": false 00:20:21.340 } 00:20:21.340 }, 00:20:21.340 { 00:20:21.340 "method": "bdev_malloc_create", 00:20:21.341 "params": { 00:20:21.341 "name": "malloc0", 00:20:21.341 "num_blocks": 8192, 00:20:21.341 "block_size": 4096, 00:20:21.341 "physical_block_size": 4096, 00:20:21.341 "uuid": "6da05d93-e162-4b04-84c4-f52fc624d6b9", 00:20:21.341 "optimal_io_boundary": 0 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "bdev_wait_for_examine" 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "nbd", 00:20:21.341 "config": [] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "scheduler", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "framework_set_scheduler", 00:20:21.341 "params": { 00:20:21.341 "name": "static" 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "nvmf", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "nvmf_set_config", 00:20:21.341 "params": { 00:20:21.341 "discovery_filter": "match_any", 00:20:21.341 "admin_cmd_passthru": { 00:20:21.341 "identify_ctrlr": false 00:20:21.341 } 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_set_max_subsystems", 00:20:21.341 "params": { 00:20:21.341 "max_subsystems": 1024 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_set_crdt", 00:20:21.341 "params": { 00:20:21.341 "crdt1": 0, 00:20:21.341 "crdt2": 0, 00:20:21.341 "crdt3": 0 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_create_transport", 00:20:21.341 "params": { 00:20:21.341 "trtype": "TCP", 00:20:21.341 "max_queue_depth": 128, 00:20:21.341 "max_io_qpairs_per_ctrlr": 127, 00:20:21.341 "in_capsule_data_size": 4096, 00:20:21.341 "max_io_size": 131072, 00:20:21.341 "io_unit_size": 131072, 00:20:21.341 "max_aq_depth": 128, 00:20:21.341 "num_shared_buffers": 511, 00:20:21.341 "buf_cache_size": 4294967295, 00:20:21.341 "dif_insert_or_strip": false, 00:20:21.341 "zcopy": false, 00:20:21.341 "c2h_success": false, 00:20:21.341 "sock_priority": 0, 00:20:21.341 "abort_timeout_sec": 1, 00:20:21.341 "ack_timeout": 0, 00:20:21.341 "data_wr_pool_size": 0 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_create_subsystem", 00:20:21.341 "params": { 00:20:21.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.341 "allow_any_host": false, 00:20:21.341 "serial_number": "00000000000000000000", 00:20:21.341 "model_number": "SPDK bdev Controller", 00:20:21.341 "max_namespaces": 32, 00:20:21.341 "min_cntlid": 1, 00:20:21.341 "max_cntlid": 65519, 00:20:21.341 "ana_reporting": false 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_subsystem_add_host", 00:20:21.341 "params": { 00:20:21.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.341 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.341 "psk": "key0" 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_subsystem_add_ns", 00:20:21.341 "params": { 00:20:21.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.341 "namespace": { 00:20:21.341 "nsid": 1, 
00:20:21.341 "bdev_name": "malloc0", 00:20:21.341 "nguid": "6DA05D93E1624B0484C4F52FC624D6B9", 00:20:21.341 "uuid": "6da05d93-e162-4b04-84c4-f52fc624d6b9", 00:20:21.341 "no_auto_visible": false 00:20:21.341 } 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "nvmf_subsystem_add_listener", 00:20:21.341 "params": { 00:20:21.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.341 "listen_address": { 00:20:21.341 "trtype": "TCP", 00:20:21.341 "adrfam": "IPv4", 00:20:21.341 "traddr": "10.0.0.2", 00:20:21.341 "trsvcid": "4420" 00:20:21.341 }, 00:20:21.341 "secure_channel": true 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }' 00:20:21.341 07:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:21.341 07:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:21.341 "subsystems": [ 00:20:21.341 { 00:20:21.341 "subsystem": "keyring", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "keyring_file_add_key", 00:20:21.341 "params": { 00:20:21.341 "name": "key0", 00:20:21.341 "path": "/tmp/tmp.i8dR3033D9" 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "iobuf", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "iobuf_set_options", 00:20:21.341 "params": { 00:20:21.341 "small_pool_count": 8192, 00:20:21.341 "large_pool_count": 1024, 00:20:21.341 "small_bufsize": 8192, 00:20:21.341 "large_bufsize": 135168 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "sock", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "sock_set_default_impl", 00:20:21.341 "params": { 00:20:21.341 "impl_name": "posix" 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "sock_impl_set_options", 00:20:21.341 "params": { 00:20:21.341 "impl_name": "ssl", 00:20:21.341 "recv_buf_size": 4096, 00:20:21.341 "send_buf_size": 4096, 00:20:21.341 "enable_recv_pipe": true, 00:20:21.341 "enable_quickack": false, 00:20:21.341 "enable_placement_id": 0, 00:20:21.341 "enable_zerocopy_send_server": true, 00:20:21.341 "enable_zerocopy_send_client": false, 00:20:21.341 "zerocopy_threshold": 0, 00:20:21.341 "tls_version": 0, 00:20:21.341 "enable_ktls": false 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "sock_impl_set_options", 00:20:21.341 "params": { 00:20:21.341 "impl_name": "posix", 00:20:21.341 "recv_buf_size": 2097152, 00:20:21.341 "send_buf_size": 2097152, 00:20:21.341 "enable_recv_pipe": true, 00:20:21.341 "enable_quickack": false, 00:20:21.341 "enable_placement_id": 0, 00:20:21.341 "enable_zerocopy_send_server": true, 00:20:21.341 "enable_zerocopy_send_client": false, 00:20:21.341 "zerocopy_threshold": 0, 00:20:21.341 "tls_version": 0, 00:20:21.341 "enable_ktls": false 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "vmd", 00:20:21.341 "config": [] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "accel", 00:20:21.341 "config": [ 00:20:21.341 { 00:20:21.341 "method": "accel_set_options", 00:20:21.341 "params": { 00:20:21.341 "small_cache_size": 128, 00:20:21.341 "large_cache_size": 16, 00:20:21.341 "task_count": 2048, 00:20:21.341 "sequence_count": 2048, 00:20:21.341 "buf_count": 2048 00:20:21.341 } 00:20:21.341 } 00:20:21.341 ] 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "subsystem": "bdev", 00:20:21.341 "config": [ 
00:20:21.341 { 00:20:21.341 "method": "bdev_set_options", 00:20:21.341 "params": { 00:20:21.341 "bdev_io_pool_size": 65535, 00:20:21.341 "bdev_io_cache_size": 256, 00:20:21.341 "bdev_auto_examine": true, 00:20:21.341 "iobuf_small_cache_size": 128, 00:20:21.341 "iobuf_large_cache_size": 16 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "bdev_raid_set_options", 00:20:21.341 "params": { 00:20:21.341 "process_window_size_kb": 1024 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "bdev_iscsi_set_options", 00:20:21.341 "params": { 00:20:21.341 "timeout_sec": 30 00:20:21.341 } 00:20:21.341 }, 00:20:21.341 { 00:20:21.341 "method": "bdev_nvme_set_options", 00:20:21.341 "params": { 00:20:21.341 "action_on_timeout": "none", 00:20:21.341 "timeout_us": 0, 00:20:21.341 "timeout_admin_us": 0, 00:20:21.341 "keep_alive_timeout_ms": 10000, 00:20:21.341 "arbitration_burst": 0, 00:20:21.341 "low_priority_weight": 0, 00:20:21.341 "medium_priority_weight": 0, 00:20:21.341 "high_priority_weight": 0, 00:20:21.341 "nvme_adminq_poll_period_us": 10000, 00:20:21.341 "nvme_ioq_poll_period_us": 0, 00:20:21.341 "io_queue_requests": 512, 00:20:21.341 "delay_cmd_submit": true, 00:20:21.341 "transport_retry_count": 4, 00:20:21.341 "bdev_retry_count": 3, 00:20:21.341 "transport_ack_timeout": 0, 00:20:21.341 "ctrlr_loss_timeout_sec": 0, 00:20:21.341 "reconnect_delay_sec": 0, 00:20:21.341 "fast_io_fail_timeout_sec": 0, 00:20:21.341 "disable_auto_failback": false, 00:20:21.341 "generate_uuids": false, 00:20:21.341 "transport_tos": 0, 00:20:21.341 "nvme_error_stat": false, 00:20:21.341 "rdma_srq_size": 0, 00:20:21.341 "io_path_stat": false, 00:20:21.342 "allow_accel_sequence": false, 00:20:21.342 "rdma_max_cq_size": 0, 00:20:21.342 "rdma_cm_event_timeout_ms": 0, 00:20:21.342 "dhchap_digests": [ 00:20:21.342 "sha256", 00:20:21.342 "sha384", 00:20:21.342 "sha512" 00:20:21.342 ], 00:20:21.342 "dhchap_dhgroups": [ 00:20:21.342 "null", 00:20:21.342 "ffdhe2048", 00:20:21.342 "ffdhe3072", 00:20:21.342 "ffdhe4096", 00:20:21.342 "ffdhe6144", 00:20:21.342 "ffdhe8192" 00:20:21.342 ] 00:20:21.342 } 00:20:21.342 }, 00:20:21.342 { 00:20:21.342 "method": "bdev_nvme_attach_controller", 00:20:21.342 "params": { 00:20:21.342 "name": "nvme0", 00:20:21.342 "trtype": "TCP", 00:20:21.342 "adrfam": "IPv4", 00:20:21.342 "traddr": "10.0.0.2", 00:20:21.342 "trsvcid": "4420", 00:20:21.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.342 "prchk_reftag": false, 00:20:21.342 "prchk_guard": false, 00:20:21.342 "ctrlr_loss_timeout_sec": 0, 00:20:21.342 "reconnect_delay_sec": 0, 00:20:21.342 "fast_io_fail_timeout_sec": 0, 00:20:21.342 "psk": "key0", 00:20:21.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.342 "hdgst": false, 00:20:21.342 "ddgst": false 00:20:21.342 } 00:20:21.342 }, 00:20:21.342 { 00:20:21.342 "method": "bdev_nvme_set_hotplug", 00:20:21.342 "params": { 00:20:21.342 "period_us": 100000, 00:20:21.342 "enable": false 00:20:21.342 } 00:20:21.342 }, 00:20:21.342 { 00:20:21.342 "method": "bdev_enable_histogram", 00:20:21.342 "params": { 00:20:21.342 "name": "nvme0n1", 00:20:21.342 "enable": true 00:20:21.342 } 00:20:21.342 }, 00:20:21.342 { 00:20:21.342 "method": "bdev_wait_for_examine" 00:20:21.342 } 00:20:21.342 ] 00:20:21.342 }, 00:20:21.342 { 00:20:21.342 "subsystem": "nbd", 00:20:21.342 "config": [] 00:20:21.342 } 00:20:21.342 ] 00:20:21.342 }' 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3290903 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3290903 ']' 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3290903 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3290903 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3290903' 00:20:21.342 killing process with pid 3290903 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3290903 00:20:21.342 Received shutdown signal, test time was about 1.000000 seconds 00:20:21.342 00:20:21.342 Latency(us) 00:20:21.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.342 =================================================================================================================== 00:20:21.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.342 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3290903 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3290656 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3290656 ']' 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3290656 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3290656 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3290656' 00:20:21.600 killing process with pid 3290656 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3290656 00:20:21.600 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3290656 00:20:21.859 07:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:21.859 07:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.859 07:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:21.859 "subsystems": [ 00:20:21.859 { 00:20:21.859 "subsystem": "keyring", 00:20:21.859 "config": [ 00:20:21.859 { 00:20:21.859 "method": "keyring_file_add_key", 00:20:21.859 "params": { 00:20:21.860 "name": "key0", 00:20:21.860 "path": "/tmp/tmp.i8dR3033D9" 00:20:21.860 } 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "iobuf", 00:20:21.860 "config": [ 00:20:21.860 { 00:20:21.860 "method": "iobuf_set_options", 00:20:21.860 "params": { 00:20:21.860 "small_pool_count": 8192, 00:20:21.860 "large_pool_count": 1024, 00:20:21.860 "small_bufsize": 8192, 00:20:21.860 "large_bufsize": 135168 00:20:21.860 } 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "sock", 00:20:21.860 "config": [ 00:20:21.860 { 
00:20:21.860 "method": "sock_set_default_impl", 00:20:21.860 "params": { 00:20:21.860 "impl_name": "posix" 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "sock_impl_set_options", 00:20:21.860 "params": { 00:20:21.860 "impl_name": "ssl", 00:20:21.860 "recv_buf_size": 4096, 00:20:21.860 "send_buf_size": 4096, 00:20:21.860 "enable_recv_pipe": true, 00:20:21.860 "enable_quickack": false, 00:20:21.860 "enable_placement_id": 0, 00:20:21.860 "enable_zerocopy_send_server": true, 00:20:21.860 "enable_zerocopy_send_client": false, 00:20:21.860 "zerocopy_threshold": 0, 00:20:21.860 "tls_version": 0, 00:20:21.860 "enable_ktls": false 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "sock_impl_set_options", 00:20:21.860 "params": { 00:20:21.860 "impl_name": "posix", 00:20:21.860 "recv_buf_size": 2097152, 00:20:21.860 "send_buf_size": 2097152, 00:20:21.860 "enable_recv_pipe": true, 00:20:21.860 "enable_quickack": false, 00:20:21.860 "enable_placement_id": 0, 00:20:21.860 "enable_zerocopy_send_server": true, 00:20:21.860 "enable_zerocopy_send_client": false, 00:20:21.860 "zerocopy_threshold": 0, 00:20:21.860 "tls_version": 0, 00:20:21.860 "enable_ktls": false 00:20:21.860 } 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "vmd", 00:20:21.860 "config": [] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "accel", 00:20:21.860 "config": [ 00:20:21.860 { 00:20:21.860 "method": "accel_set_options", 00:20:21.860 "params": { 00:20:21.860 "small_cache_size": 128, 00:20:21.860 "large_cache_size": 16, 00:20:21.860 "task_count": 2048, 00:20:21.860 "sequence_count": 2048, 00:20:21.860 "buf_count": 2048 00:20:21.860 } 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "bdev", 00:20:21.860 "config": [ 00:20:21.860 { 00:20:21.860 "method": "bdev_set_options", 00:20:21.860 "params": { 00:20:21.860 "bdev_io_pool_size": 65535, 00:20:21.860 "bdev_io_cache_size": 256, 00:20:21.860 "bdev_auto_examine": true, 00:20:21.860 "iobuf_small_cache_size": 128, 00:20:21.860 "iobuf_large_cache_size": 16 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_raid_set_options", 00:20:21.860 "params": { 00:20:21.860 "process_window_size_kb": 1024 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_iscsi_set_options", 00:20:21.860 "params": { 00:20:21.860 "timeout_sec": 30 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_nvme_set_options", 00:20:21.860 "params": { 00:20:21.860 "action_on_timeout": "none", 00:20:21.860 "timeout_us": 0, 00:20:21.860 "timeout_admin_us": 0, 00:20:21.860 "keep_alive_timeout_ms": 10000, 00:20:21.860 "arbitration_burst": 0, 00:20:21.860 "low_priority_weight": 0, 00:20:21.860 "medium_priority_weight": 0, 00:20:21.860 "high_priority_weight": 0, 00:20:21.860 "nvme_adminq_poll_period_us": 10000, 00:20:21.860 "nvme_ioq_poll_period_us": 0, 00:20:21.860 "io_queue_requests": 0, 00:20:21.860 "delay_cmd_submit": true, 00:20:21.860 "transport_retry_count": 4, 00:20:21.860 "bdev_retry_count": 3, 00:20:21.860 "transport_ack_timeout": 0, 00:20:21.860 "ctrlr_loss_timeout_sec": 0, 00:20:21.860 "reconnect_delay_sec": 0, 00:20:21.860 "fast_io_fail_timeout_sec": 0, 00:20:21.860 "disable_auto_failback": false, 00:20:21.860 "generate_uuids": false, 00:20:21.860 "transport_tos": 0, 00:20:21.860 "nvme_error_stat": false, 00:20:21.860 "rdma_srq_size": 0, 00:20:21.860 "io_path_stat": false, 00:20:21.860 "allow_accel_sequence": false, 00:20:21.860 
"rdma_max_cq_size": 0, 00:20:21.860 "rdma_cm_event_timeout_ms": 0, 00:20:21.860 "dhchap_digests": [ 00:20:21.860 "sha256", 00:20:21.860 "sha384", 00:20:21.860 "sha512" 00:20:21.860 ], 00:20:21.860 "dhchap_dhgroups": [ 00:20:21.860 "null", 00:20:21.860 "ffdhe2048", 00:20:21.860 "ffdhe3072", 00:20:21.860 "ffdhe4096", 00:20:21.860 "ffdhe6144", 00:20:21.860 "ffdhe8192" 00:20:21.860 ] 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_nvme_set_hotplug", 00:20:21.860 "params": { 00:20:21.860 "period_us": 100000, 00:20:21.860 "enable": false 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_malloc_create", 00:20:21.860 "params": { 00:20:21.860 "name": "malloc0", 00:20:21.860 "num_blocks": 8192, 00:20:21.860 "block_size": 4096, 00:20:21.860 "physical_block_size": 4096, 00:20:21.860 "uuid": "6da05d93-e162-4b04-84c4-f52fc624d6b9", 00:20:21.860 "optimal_io_boundary": 0 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "bdev_wait_for_examine" 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "nbd", 00:20:21.860 "config": [] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "scheduler", 00:20:21.860 "config": [ 00:20:21.860 { 00:20:21.860 "method": "framework_set_scheduler", 00:20:21.860 "params": { 00:20:21.860 "name": "static" 00:20:21.860 } 00:20:21.860 } 00:20:21.860 ] 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "subsystem": "nvmf", 00:20:21.860 "config": [ 00:20:21.860 { 00:20:21.860 "method": "nvmf_set_config", 00:20:21.860 "params": { 00:20:21.860 "discovery_filter": "match_any", 00:20:21.860 "admin_cmd_passthru": { 00:20:21.860 "identify_ctrlr": false 00:20:21.860 } 00:20:21.860 } 00:20:21.860 }, 00:20:21.860 { 00:20:21.860 "method": "nvmf_set_max_subsystems", 00:20:21.860 "params": { 00:20:21.860 "max_subsystems": 1024 00:20:21.860 } 00:20:21.860 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_set_crdt", 00:20:21.861 "params": { 00:20:21.861 "crdt1": 0, 00:20:21.861 "crdt2": 0, 00:20:21.861 "crdt3": 0 00:20:21.861 } 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_create_transport", 00:20:21.861 "params": { 00:20:21.861 "trtype": "TCP", 00:20:21.861 "max_queue_depth": 128, 00:20:21.861 "max_io_qpairs_per_ctrlr": 127, 00:20:21.861 "in_capsule_data_size": 4096, 00:20:21.861 "max_io_size": 131072, 00:20:21.861 "io_unit_size": 131072, 00:20:21.861 "max_aq_depth": 128, 00:20:21.861 "num_shared_buffers": 511, 00:20:21.861 "buf_cache_size": 4294967295, 00:20:21.861 "dif_insert_or_strip": false, 00:20:21.861 "zcopy": false, 00:20:21.861 "c2h_success": false, 00:20:21.861 "sock_priority": 0, 00:20:21.861 "abort_timeout_sec": 1, 00:20:21.861 "ack_timeout": 0, 00:20:21.861 "data_wr_pool_size": 0 00:20:21.861 } 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_create_subsystem", 00:20:21.861 "params": { 00:20:21.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.861 "allow_any_host": false, 00:20:21.861 "serial_number": "00000000000000000000", 00:20:21.861 "model_number": "SPDK bdev Controller", 00:20:21.861 "max_namespaces": 32, 00:20:21.861 "min_cntlid": 1, 00:20:21.861 "max_cntlid": 65519, 00:20:21.861 "ana_reporting": false 00:20:21.861 } 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_subsystem_add_host", 00:20:21.861 "params": { 00:20:21.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.861 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.861 "psk": "key0" 00:20:21.861 } 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_subsystem_add_ns", 00:20:21.861 
"params": { 00:20:21.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.861 "namespace": { 00:20:21.861 "nsid": 1, 00:20:21.861 "bdev_name": "malloc0", 00:20:21.861 "nguid": "6DA05D93E1624B0484C4F52FC624D6B9", 00:20:21.861 "uuid": "6da05d93-e162-4b04-84c4-f52fc624d6b9", 00:20:21.861 "no_auto_visible": false 00:20:21.861 } 00:20:21.861 } 00:20:21.861 }, 00:20:21.861 { 00:20:21.861 "method": "nvmf_subsystem_add_listener", 00:20:21.861 "params": { 00:20:21.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.861 "listen_address": { 00:20:21.861 "trtype": "TCP", 00:20:21.861 "adrfam": "IPv4", 00:20:21.861 "traddr": "10.0.0.2", 00:20:21.861 "trsvcid": "4420" 00:20:21.861 }, 00:20:21.861 "secure_channel": true 00:20:21.861 } 00:20:21.861 } 00:20:21.861 ] 00:20:21.861 } 00:20:21.861 ] 00:20:21.861 }' 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3291386 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3291386 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3291386 ']' 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.861 07:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 [2024-07-15 07:56:06.545700] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:21.861 [2024-07-15 07:56:06.545747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.861 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.120 [2024-07-15 07:56:06.613592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.120 [2024-07-15 07:56:06.691940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.120 [2024-07-15 07:56:06.691974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.120 [2024-07-15 07:56:06.691981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.120 [2024-07-15 07:56:06.691987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.120 [2024-07-15 07:56:06.691992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.120 [2024-07-15 07:56:06.692057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:20:22.379 [2024-07-15 07:56:06.903395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:20:22.379 [2024-07-15 07:56:06.935424] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:20:22.379 [2024-07-15 07:56:06.943558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3291586 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3291586 /var/tmp/bdevperf.sock 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3291586 ']' 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
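The client half follows the same recipe: bdevperf starts idle (the -z flag) on its own private RPC socket, reads the bperfcfg JSON from fd 63, and only runs I/O after its controller is confirmed. Condensed into a sketch (flags copied from the command line above; paths shortened):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # the attach in bperfcfg must have produced a controller named nvme0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # kick the actual workload over the same socket
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests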
00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.638 07:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:22.638 "subsystems": [ 00:20:22.638 { 00:20:22.638 "subsystem": "keyring", 00:20:22.638 "config": [ 00:20:22.638 { 00:20:22.638 "method": "keyring_file_add_key", 00:20:22.638 "params": { 00:20:22.638 "name": "key0", 00:20:22.638 "path": "/tmp/tmp.i8dR3033D9" 00:20:22.638 } 00:20:22.638 } 00:20:22.638 ] 00:20:22.638 }, 00:20:22.638 { 00:20:22.638 "subsystem": "iobuf", 00:20:22.638 "config": [ 00:20:22.638 { 00:20:22.638 "method": "iobuf_set_options", 00:20:22.638 "params": { 00:20:22.638 "small_pool_count": 8192, 00:20:22.638 "large_pool_count": 1024, 00:20:22.638 "small_bufsize": 8192, 00:20:22.638 "large_bufsize": 135168 00:20:22.638 } 00:20:22.638 } 00:20:22.638 ] 00:20:22.638 }, 00:20:22.638 { 00:20:22.638 "subsystem": "sock", 00:20:22.638 "config": [ 00:20:22.638 { 00:20:22.638 "method": "sock_set_default_impl", 00:20:22.638 "params": { 00:20:22.638 "impl_name": "posix" 00:20:22.638 } 00:20:22.638 }, 00:20:22.638 { 00:20:22.638 "method": "sock_impl_set_options", 00:20:22.638 "params": { 00:20:22.638 "impl_name": "ssl", 00:20:22.638 "recv_buf_size": 4096, 00:20:22.638 "send_buf_size": 4096, 00:20:22.638 "enable_recv_pipe": true, 00:20:22.638 "enable_quickack": false, 00:20:22.638 "enable_placement_id": 0, 00:20:22.638 "enable_zerocopy_send_server": true, 00:20:22.638 "enable_zerocopy_send_client": false, 00:20:22.638 "zerocopy_threshold": 0, 00:20:22.638 "tls_version": 0, 00:20:22.638 "enable_ktls": false 00:20:22.638 } 00:20:22.638 }, 00:20:22.638 { 00:20:22.638 "method": "sock_impl_set_options", 00:20:22.638 "params": { 00:20:22.638 "impl_name": "posix", 00:20:22.638 "recv_buf_size": 2097152, 00:20:22.638 "send_buf_size": 2097152, 00:20:22.638 "enable_recv_pipe": true, 00:20:22.638 "enable_quickack": false, 00:20:22.638 "enable_placement_id": 0, 00:20:22.638 "enable_zerocopy_send_server": true, 00:20:22.638 "enable_zerocopy_send_client": false, 00:20:22.638 "zerocopy_threshold": 0, 00:20:22.638 "tls_version": 0, 00:20:22.638 "enable_ktls": false 00:20:22.638 } 00:20:22.638 } 00:20:22.638 ] 00:20:22.638 }, 00:20:22.638 { 00:20:22.638 "subsystem": "vmd", 00:20:22.638 "config": [] 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "subsystem": "accel", 00:20:22.639 "config": [ 00:20:22.639 { 00:20:22.639 "method": "accel_set_options", 00:20:22.639 "params": { 00:20:22.639 "small_cache_size": 128, 00:20:22.639 "large_cache_size": 16, 00:20:22.639 "task_count": 2048, 00:20:22.639 "sequence_count": 2048, 00:20:22.639 "buf_count": 2048 00:20:22.639 } 00:20:22.639 } 00:20:22.639 ] 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "subsystem": "bdev", 00:20:22.639 "config": [ 00:20:22.639 { 00:20:22.639 "method": "bdev_set_options", 00:20:22.639 "params": { 00:20:22.639 "bdev_io_pool_size": 65535, 00:20:22.639 "bdev_io_cache_size": 256, 00:20:22.639 "bdev_auto_examine": true, 00:20:22.639 "iobuf_small_cache_size": 128, 00:20:22.639 "iobuf_large_cache_size": 16 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_raid_set_options", 00:20:22.639 "params": { 00:20:22.639 "process_window_size_kb": 1024 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_iscsi_set_options", 00:20:22.639 "params": { 00:20:22.639 "timeout_sec": 30 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_nvme_set_options", 00:20:22.639 "params": { 00:20:22.639 "action_on_timeout": "none", 
00:20:22.639 "timeout_us": 0, 00:20:22.639 "timeout_admin_us": 0, 00:20:22.639 "keep_alive_timeout_ms": 10000, 00:20:22.639 "arbitration_burst": 0, 00:20:22.639 "low_priority_weight": 0, 00:20:22.639 "medium_priority_weight": 0, 00:20:22.639 "high_priority_weight": 0, 00:20:22.639 "nvme_adminq_poll_period_us": 10000, 00:20:22.639 "nvme_ioq_poll_period_us": 0, 00:20:22.639 "io_queue_requests": 512, 00:20:22.639 "delay_cmd_submit": true, 00:20:22.639 "transport_retry_count": 4, 00:20:22.639 "bdev_retry_count": 3, 00:20:22.639 "transport_ack_timeout": 0, 00:20:22.639 "ctrlr_loss_timeout_sec": 0, 00:20:22.639 "reconnect_delay_sec": 0, 00:20:22.639 "fast_io_fail_timeout_sec": 0, 00:20:22.639 "disable_auto_failback": false, 00:20:22.639 "generate_uuids": false, 00:20:22.639 "transport_tos": 0, 00:20:22.639 "nvme_error_stat": false, 00:20:22.639 "rdma_srq_size": 0, 00:20:22.639 "io_path_stat": false, 00:20:22.639 "allow_accel_sequence": false, 00:20:22.639 "rdma_max_cq_size": 0, 00:20:22.639 "rdma_cm_event_timeout_ms": 0, 00:20:22.639 "dhchap_digests": [ 00:20:22.639 "sha256", 00:20:22.639 "sha384", 00:20:22.639 "sha512" 00:20:22.639 ], 00:20:22.639 "dhchap_dhgroups": [ 00:20:22.639 "null", 00:20:22.639 "ffdhe2048", 00:20:22.639 "ffdhe3072", 00:20:22.639 "ffdhe4096", 00:20:22.639 "ffdhe6144", 00:20:22.639 "ffdhe8192" 00:20:22.639 ] 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_nvme_attach_controller", 00:20:22.639 "params": { 00:20:22.639 "name": "nvme0", 00:20:22.639 "trtype": "TCP", 00:20:22.639 "adrfam": "IPv4", 00:20:22.639 "traddr": "10.0.0.2", 00:20:22.639 "trsvcid": "4420", 00:20:22.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.639 "prchk_reftag": false, 00:20:22.639 "prchk_guard": false, 00:20:22.639 "ctrlr_loss_timeout_sec": 0, 00:20:22.639 "reconnect_delay_sec": 0, 00:20:22.639 "fast_io_fail_timeout_sec": 0, 00:20:22.639 "psk": "key0", 00:20:22.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.639 "hdgst": false, 00:20:22.639 "ddgst": false 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_nvme_set_hotplug", 00:20:22.639 "params": { 00:20:22.639 "period_us": 100000, 00:20:22.639 "enable": false 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_enable_histogram", 00:20:22.639 "params": { 00:20:22.639 "name": "nvme0n1", 00:20:22.639 "enable": true 00:20:22.639 } 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "method": "bdev_wait_for_examine" 00:20:22.639 } 00:20:22.639 ] 00:20:22.639 }, 00:20:22.639 { 00:20:22.639 "subsystem": "nbd", 00:20:22.639 "config": [] 00:20:22.639 } 00:20:22.639 ] 00:20:22.639 }' 00:20:22.639 07:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.898 [2024-07-15 07:56:07.432097] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:20:22.898 [2024-07-15 07:56:07.432143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291586 ] 00:20:22.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.898 [2024-07-15 07:56:07.500128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.898 [2024-07-15 07:56:07.579554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.157 [2024-07-15 07:56:07.729848] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.725 07:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.983 Running I/O for 1 seconds... 00:20:24.971 00:20:24.971 Latency(us) 00:20:24.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.971 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.971 Verification LBA range: start 0x0 length 0x2000 00:20:24.971 nvme0n1 : 1.01 5228.43 20.42 0.00 0.00 24302.45 4843.97 21655.37 00:20:24.971 =================================================================================================================== 00:20:24.971 Total : 5228.43 20.42 0.00 0.00 24302.45 4843.97 21655.37 00:20:24.971 0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:24.971 nvmf_trace.0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3291586 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3291586 ']' 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3291586 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3291586 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3291586' 00:20:24.971 killing process with pid 3291586 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3291586 00:20:24.971 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.971 00:20:24.971 Latency(us) 00:20:24.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.971 =================================================================================================================== 00:20:24.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.971 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3291586 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.230 rmmod nvme_tcp 00:20:25.230 rmmod nvme_fabrics 00:20:25.230 rmmod nvme_keyring 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3291386 ']' 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3291386 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3291386 ']' 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3291386 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3291386 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3291386' 00:20:25.230 killing process with pid 3291386 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3291386 00:20:25.230 07:56:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3291386 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.488 07:56:10 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.488 07:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.024 07:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.024 07:56:12 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3RX0f3c7G4 /tmp/tmp.vJNOwzqedd /tmp/tmp.i8dR3033D9 00:20:28.024 00:20:28.024 real 1m25.494s 00:20:28.024 user 2m11.917s 00:20:28.024 sys 0m29.223s 00:20:28.024 07:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.024 07:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.024 ************************************ 00:20:28.024 END TEST nvmf_tls 00:20:28.024 ************************************ 00:20:28.024 07:56:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:28.024 07:56:12 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.024 07:56:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.024 07:56:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.024 07:56:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:28.024 ************************************ 00:20:28.024 START TEST nvmf_fips 00:20:28.024 ************************************ 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.024 * Looking for test storage... 
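Before touching NVMe/TCP at all, fips.sh gates on the crypto environment, and the xtrace that follows walks through each gate: the installed OpenSSL (3.0.9 in this run) is compared field by field against the 3.0.0 floor, the provider list must include a FIPS provider, and a non-approved digest such as MD5 must be refused. The same checks reduce to a short sketch (plain exit codes in place of the harness's ge/NOT helpers; the version test is a coarse stand-in for the full compare):

    ver=$(openssl version | awk '{print $2}')            # "3.0.9" in this run
    [[ $ver == 3.* ]] || exit 1                          # need OpenSSL >= 3.0.0
    openssl list -providers | grep -qi fips || exit 1    # FIPS provider must be loaded
    if echo test | openssl md5 >/dev/null 2>&1; then
        exit 1                                           # MD5 succeeding means FIPS is off
    fi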
00:20:28.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.024 07:56:12 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.024 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:28.025 Error setting digest 00:20:28.025 00B26ED1DB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:28.025 00B26ED1DB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.025 07:56:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:34.609 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.610 
07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.610 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.610 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.610 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.610 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:34.610 00:20:34.610 --- 10.0.0.2 ping statistics --- 00:20:34.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.610 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:34.610 00:20:34.610 --- 10.0.0.1 ping statistics --- 00:20:34.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.610 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3295428 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3295428 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3295428 ']' 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.610 07:56:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.610 [2024-07-15 07:56:18.434364] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:34.610 [2024-07-15 07:56:18.434411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.610 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.610 [2024-07-15 07:56:18.505421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.610 [2024-07-15 07:56:18.582486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.611 [2024-07-15 07:56:18.582524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.611 [2024-07-15 07:56:18.582531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.611 [2024-07-15 07:56:18.582536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.611 [2024-07-15 07:56:18.582545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.611 [2024-07-15 07:56:18.582580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:34.611 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:34.870 [2024-07-15 07:56:19.414102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.870 [2024-07-15 07:56:19.430106] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.870 [2024-07-15 07:56:19.430286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.870 [2024-07-15 07:56:19.458293] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.870 malloc0 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3295677 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3295677 /var/tmp/bdevperf.sock 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3295677 ']' 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.870 07:56:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.870 [2024-07-15 07:56:19.547570] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:20:34.870 [2024-07-15 07:56:19.547614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295677 ] 00:20:34.870 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.870 [2024-07-15 07:56:19.612838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.129 [2024-07-15 07:56:19.684871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.697 07:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.697 07:56:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:35.697 07:56:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:35.956 [2024-07-15 07:56:20.483519] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.956 [2024-07-15 07:56:20.483590] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.956 TLSTESTn1 00:20:35.956 07:56:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.956 Running I/O for 10 seconds... 
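The FIPS test drives TLS end to end: a PSK is written to a key file, a bdevperf instance attaches to the target over NVMe/TCP using that key, and perform_tests starts the verify workload whose results follow. A condensed sketch with paths shortened (the key string, RPC socket, and NQNs are exactly those echoed in the trace; the harness waits for each socket to appear before issuing RPCs):

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt                        # PSK files must not be group/world readable
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the table that follows, MiB/s is simply IOPS times the 4096-byte IO size: 5445.48 * 4096 / 2^20 is approximately 21.27 MiB/s.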
00:20:45.984 00:20:45.984 Latency(us) 00:20:45.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.984 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:45.984 Verification LBA range: start 0x0 length 0x2000 00:20:45.984 TLSTESTn1 : 10.02 5445.48 21.27 0.00 0.00 23467.44 7294.44 25188.62 00:20:45.984 =================================================================================================================== 00:20:45.984 Total : 5445.48 21.27 0.00 0.00 23467.44 7294.44 25188.62 00:20:45.984 0 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:45.984 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:46.242 nvmf_trace.0 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3295677 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3295677 ']' 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3295677 00:20:46.242 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3295677 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3295677' 00:20:46.243 killing process with pid 3295677 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3295677 00:20:46.243 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.243 00:20:46.243 Latency(us) 00:20:46.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.243 =================================================================================================================== 00:20:46.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.243 [2024-07-15 07:56:30.861636] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.243 07:56:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3295677 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.501 rmmod nvme_tcp 00:20:46.501 rmmod nvme_fabrics 00:20:46.501 rmmod nvme_keyring 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3295428 ']' 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3295428 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3295428 ']' 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3295428 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3295428 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:46.501 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3295428' 00:20:46.501 killing process with pid 3295428 00:20:46.502 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3295428 00:20:46.502 [2024-07-15 07:56:31.166731] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:46.502 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3295428 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.760 07:56:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.294 07:56:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:49.294 07:56:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:49.294 00:20:49.294 real 0m21.136s 00:20:49.294 user 0m22.837s 00:20:49.294 sys 0m9.154s 00:20:49.294 07:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.294 07:56:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.294 ************************************ 00:20:49.294 END TEST nvmf_fips 
00:20:49.294 ************************************ 00:20:49.294 07:56:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:49.294 07:56:33 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:49.294 07:56:33 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:49.294 07:56:33 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:49.294 07:56:33 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:49.294 07:56:33 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:49.294 07:56:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.569 07:56:38 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.569 07:56:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.570 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.570 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:54.570 07:56:38 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:54.570 07:56:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.570 07:56:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
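nvmf.sh re-runs the same device discovery before dispatching the ADQ test, which is why the "Found ..." lines repeat. Reduced to its core, the pass maps each supported PCI function to the netdev behind it via sysfs (device list and resulting names as in this run):

  pci_devs=(0000:86:00.0 0000:86:00.1)      # both E810 ports, device ID 0x159b
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) behind the function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix
      net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
  done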
00:20:54.570 07:56:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.570 ************************************ 00:20:54.570 START TEST nvmf_perf_adq 00:20:54.570 ************************************ 00:20:54.570 07:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:54.570 * Looking for test storage... 00:20:54.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.570 07:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.844 Found 0000:86:00.1 (0x8086 - 0x159b) 
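The oddly escaped comparisons in this pass, [[ 0x159b == \0\x\1\0\1\7 ]] and the 0x1019 variant, are not globs: inside [[ ]], backslash-escaping every character of the right-hand side forces a literal string match, so these are plain equality tests against the two Mellanox device IDs the script singles out for extra handling. Both ports here are 0x159b, so both tests fail and the generic path is taken:

  [[ 0x159b == \0\x\1\0\1\7 ]] && echo special-case   # literal compare vs "0x1017": false
  [[ 0x159b == \0\x\1\0\1\9 ]] && echo special-case   # literal compare vs "0x1019": false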
00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.844 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.844 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:59.844 07:56:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:00.780 07:56:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:03.311 07:56:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:08.587 07:56:52 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:08.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:08.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:08.587 Found net devices under 0000:86:00.0: cvl_0_0 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.587 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.587 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.588 07:56:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:08.588 00:21:08.588 --- 10.0.0.2 ping statistics --- 00:21:08.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.588 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:21:08.588 00:21:08.588 --- 10.0.0.1 ping statistics --- 00:21:08.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.588 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3305383 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3305383 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3305383 ']' 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.588 07:56:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.588 [2024-07-15 07:56:52.801091] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
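Unlike the FIPS run, the ADQ target is launched with a four-core mask (-m 0xF) and --wait-for-rpc, which holds the app in a pre-init state. The ordering matters: posix socket options such as the placement ID have to be in place before the framework initializes, so the script sets them over RPC first, as the next part of the trace shows. A hedged sketch of that sequence, with nvmf_tgt and rpc_cmd standing in for the full paths used in the trace:

  nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &    # starts paused on 4 cores
  # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
  rpc_cmd sock_impl_set_options -i posix \
      --enable-placement-id 0 --enable-zerocopy-send-server
  rpc_cmd framework_start_init                       # only now do subsystems come up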
00:21:08.588 [2024-07-15 07:56:52.801138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.588 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.588 [2024-07-15 07:56:52.872377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.588 [2024-07-15 07:56:52.952840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.588 [2024-07-15 07:56:52.952875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.588 [2024-07-15 07:56:52.952882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.588 [2024-07-15 07:56:52.952888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.588 [2024-07-15 07:56:52.952893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.588 [2024-07-15 07:56:52.952937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.588 [2024-07-15 07:56:52.953059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.588 [2024-07-15 07:56:52.953568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.588 [2024-07-15 07:56:52.953568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 [2024-07-15 07:56:53.792886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 Malloc1 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.155 [2024-07-15 07:56:53.840704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3305628 00:21:09.155 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:09.156 07:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:09.156 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.109 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:11.109 07:56:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.109 07:56:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:11.368 
"tick_rate": 2300000000, 00:21:11.368 "poll_groups": [ 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_000", 00:21:11.368 "admin_qpairs": 1, 00:21:11.368 "io_qpairs": 1, 00:21:11.368 "current_admin_qpairs": 1, 00:21:11.368 "current_io_qpairs": 1, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 20828, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_001", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 1, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 1, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 21172, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_002", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 1, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 1, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 20933, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_003", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 1, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 1, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 20734, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }' 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:11.368 07:56:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3305628 00:21:19.511 Initializing NVMe Controllers 00:21:19.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:19.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:19.512 Initialization complete. Launching workers. 
00:21:19.512 ======================================================== 00:21:19.512 Latency(us) 00:21:19.512 Device Information : IOPS MiB/s Average min max 00:21:19.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10980.50 42.89 5829.17 2939.41 8794.55 00:21:19.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11165.60 43.62 5732.02 1963.20 10354.41 00:21:19.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11112.40 43.41 5758.62 1751.39 11241.48 00:21:19.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11002.90 42.98 5816.68 1910.53 10713.49 00:21:19.512 ======================================================== 00:21:19.512 Total : 44261.39 172.90 5783.84 1751.39 11241.48 00:21:19.512 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.512 07:57:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.512 rmmod nvme_tcp 00:21:19.512 rmmod nvme_fabrics 00:21:19.512 rmmod nvme_keyring 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3305383 ']' 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3305383 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3305383 ']' 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3305383 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3305383 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3305383' 00:21:19.512 killing process with pid 3305383 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3305383 00:21:19.512 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3305383 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.771 07:57:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.675 07:57:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.675 07:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:21.675 07:57:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:23.052 07:57:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:24.957 07:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.231 
07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
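Both functions of the E810 adapter (vendor 0x8086, device 0x159b) match the classification arrays above and end up in pci_devs. The same inventory can be taken by hand; a sketch, with the vendor/device IDs lifted from the log (the 0000:86:00.x addresses are this rig's and will differ elsewhere):

# List Intel E810 (8086:159b) PCI functions with full domain addresses.
lspci -D -d 8086:159b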
00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:30.231 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.231 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.231 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.232 
07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:21:30.232 00:21:30.232 --- 10.0.0.2 ping statistics --- 00:21:30.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.232 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:30.232 00:21:30.232 --- 10.0.0.1 ping statistics --- 00:21:30.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.232 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:30.232 net.core.busy_poll = 1 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:30.232 net.core.busy_read = 1 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:30.232 07:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:30.491 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:30.491 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.491 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.491 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3309922 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3309922 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3309922 ']' 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.492 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.492 [2024-07-15 07:57:15.063629] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:21:30.492 [2024-07-15 07:57:15.063679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.492 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.492 [2024-07-15 07:57:15.132529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.492 [2024-07-15 07:57:15.211182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.492 [2024-07-15 07:57:15.211221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.492 [2024-07-15 07:57:15.211234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.492 [2024-07-15 07:57:15.211240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.492 [2024-07-15 07:57:15.211245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
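Condensed, the ADQ plumbing just applied is: turn on hardware TC offload, enable socket busy polling, split the port into two traffic-class channels, and steer NVMe/TCP traffic for port 4420 into the second channel in hardware (skip_sw forces the flower match to be done by the NIC rather than in software). A sketch of the same sequence with the ip-netns prefixes dropped; the interface name, target IP, and 2@0/2@2 queue layout are this test's values, not universal defaults:

# ADQ steering recipe as run by perf_adq.sh (values from this log).
ethtool --offload cvl_0_0 hw-tc-offload on
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1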
00:21:30.492 [2024-07-15 07:57:15.211292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.492 [2024-07-15 07:57:15.211403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.492 [2024-07-15 07:57:15.211507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.492 [2024-07-15 07:57:15.211508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.429 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.429 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:31.429 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.429 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 [2024-07-15 07:57:16.056120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 Malloc1 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.430 [2024-07-15 07:57:16.108148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3310179 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:31.430 07:57:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:31.430 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:33.963 "tick_rate": 2300000000, 00:21:33.963 "poll_groups": [ 00:21:33.963 { 00:21:33.963 "name": "nvmf_tgt_poll_group_000", 00:21:33.963 "admin_qpairs": 1, 00:21:33.963 "io_qpairs": 3, 00:21:33.963 "current_admin_qpairs": 1, 00:21:33.963 "current_io_qpairs": 3, 00:21:33.963 "pending_bdev_io": 0, 00:21:33.963 "completed_nvme_io": 30201, 00:21:33.963 "transports": [ 00:21:33.963 { 00:21:33.963 "trtype": "TCP" 00:21:33.963 } 00:21:33.963 ] 00:21:33.963 }, 00:21:33.963 { 00:21:33.963 "name": "nvmf_tgt_poll_group_001", 00:21:33.963 "admin_qpairs": 0, 00:21:33.963 "io_qpairs": 1, 00:21:33.963 "current_admin_qpairs": 0, 00:21:33.963 "current_io_qpairs": 1, 00:21:33.963 "pending_bdev_io": 0, 00:21:33.963 "completed_nvme_io": 27979, 00:21:33.963 "transports": [ 00:21:33.963 { 00:21:33.963 "trtype": "TCP" 00:21:33.963 } 00:21:33.963 ] 00:21:33.963 }, 00:21:33.963 { 00:21:33.963 "name": "nvmf_tgt_poll_group_002", 00:21:33.963 "admin_qpairs": 0, 00:21:33.963 "io_qpairs": 0, 00:21:33.963 "current_admin_qpairs": 0, 00:21:33.963 "current_io_qpairs": 0, 00:21:33.963 "pending_bdev_io": 0, 00:21:33.963 "completed_nvme_io": 0, 
00:21:33.963 "transports": [ 00:21:33.963 { 00:21:33.963 "trtype": "TCP" 00:21:33.963 } 00:21:33.963 ] 00:21:33.963 }, 00:21:33.963 { 00:21:33.963 "name": "nvmf_tgt_poll_group_003", 00:21:33.963 "admin_qpairs": 0, 00:21:33.963 "io_qpairs": 0, 00:21:33.963 "current_admin_qpairs": 0, 00:21:33.963 "current_io_qpairs": 0, 00:21:33.963 "pending_bdev_io": 0, 00:21:33.963 "completed_nvme_io": 0, 00:21:33.963 "transports": [ 00:21:33.963 { 00:21:33.963 "trtype": "TCP" 00:21:33.963 } 00:21:33.963 ] 00:21:33.963 } 00:21:33.963 ] 00:21:33.963 }' 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:33.963 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:33.964 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:33.964 07:57:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3310179 00:21:42.119 Initializing NVMe Controllers 00:21:42.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:42.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:42.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:42.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:42.119 Initialization complete. Launching workers. 00:21:42.119 ======================================================== 00:21:42.119 Latency(us) 00:21:42.119 Device Information : IOPS MiB/s Average min max 00:21:42.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5780.50 22.58 11078.77 1555.16 59787.60 00:21:42.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5001.40 19.54 12803.62 1902.75 59237.18 00:21:42.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15077.60 58.90 4244.60 1317.14 7134.13 00:21:42.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5108.20 19.95 12535.65 1760.68 58620.36 00:21:42.119 ======================================================== 00:21:42.119 Total : 30967.69 120.97 8270.23 1317.14 59787.60 00:21:42.119 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.119 rmmod nvme_tcp 00:21:42.119 rmmod nvme_fabrics 00:21:42.119 rmmod nvme_keyring 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3309922 ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3309922 ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3309922' 00:21:42.119 killing process with pid 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3309922 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.119 07:57:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.410 07:57:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.410 07:57:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:45.410 00:21:45.410 real 0m50.731s 00:21:45.410 user 2m49.270s 00:21:45.410 sys 0m9.581s 00:21:45.410 07:57:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.410 07:57:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.410 ************************************ 00:21:45.410 END TEST nvmf_perf_adq 00:21:45.410 ************************************ 00:21:45.410 07:57:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:45.410 07:57:29 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:45.410 07:57:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.410 07:57:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.410 07:57:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.410 ************************************ 00:21:45.410 START TEST nvmf_shutdown 00:21:45.410 ************************************ 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:45.410 * Looking for test storage... 
00:21:45.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.410 ************************************ 00:21:45.410 START TEST nvmf_shutdown_tc1 00:21:45.410 ************************************ 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:45.410 07:57:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.410 07:57:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.696 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.696 07:57:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.696 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.696 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:50.696 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.697 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.956 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:50.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:50.957 00:21:50.957 --- 10.0.0.2 ping statistics --- 00:21:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.957 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:21:50.957 00:21:50.957 --- 10.0.0.1 ping statistics --- 00:21:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.957 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3315560 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3315560 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3315560 ']' 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.957 07:57:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.216 [2024-07-15 07:57:35.747495] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
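Taken together, the nvmf_tcp_init steps traced above build the loopback topology the rest of the test runs on: the first e810 port (cvl_0_0, the target side, 10.0.0.2) is moved into a private network namespace while the second port (cvl_0_1, the initiator side, 10.0.0.1) stays in the root namespace, and connectivity is ping-verified in both directions. The same commands, collected in execution order as a standalone sketch:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420)
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> initiator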
00:21:51.216 [2024-07-15 07:57:35.747542] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.216 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.216 [2024-07-15 07:57:35.818167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.216 [2024-07-15 07:57:35.899152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.216 [2024-07-15 07:57:35.899185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.216 [2024-07-15 07:57:35.899192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.216 [2024-07-15 07:57:35.899198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.216 [2024-07-15 07:57:35.899203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.216 [2024-07-15 07:57:35.899341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.216 [2024-07-15 07:57:35.899448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.216 [2024-07-15 07:57:35.899553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.216 [2024-07-15 07:57:35.899554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 [2024-07-15 07:57:36.607185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:52.155 07:57:36 
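With connectivity proven, nvmfappstart launches the target inside the namespace and the first RPCs go out; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A sketch of the bring-up, plus one of the ten per-subsystem batches the loop traced below appends to rpcs.txt (flag meanings per scripts/rpc.py: -o disables the TCP C2H success optimization, -u sets the I/O unit size; the malloc size/block-size and serial number in the batch are assumptions, not read from this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                  # blocks until /var/tmp/spdk.sock answers
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # One representative loop iteration (reconstructed; <<- strips leading tabs):
    cat >> "$testdir/rpcs.txt" <<-EOF
	bdev_malloc_create 64 512 -b Malloc$i
	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
	EOF

The whole rpcs.txt is then applied in a single rpc_cmd call (shutdown.sh@35), which is why the Malloc1..Malloc10 bdevs and the 10.0.0.2 port 4420 listener notice appear together further down.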
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.155 07:57:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 Malloc1 00:21:52.155 [2024-07-15 07:57:36.702911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.155 Malloc2 00:21:52.155 Malloc3 00:21:52.155 Malloc4 00:21:52.155 Malloc5 00:21:52.155 Malloc6 00:21:52.415 Malloc7 00:21:52.415 Malloc8 00:21:52.415 Malloc9 00:21:52.415 Malloc10 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3315850 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3315850 
/var/tmp/bdevperf.sock 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3315850 ']' 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.415 { 00:21:52.415 "params": { 00:21:52.415 "name": "Nvme$subsystem", 00:21:52.415 "trtype": "$TEST_TRANSPORT", 00:21:52.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.415 "adrfam": "ipv4", 00:21:52.415 "trsvcid": "$NVMF_PORT", 00:21:52.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.415 "hdgst": ${hdgst:-false}, 00:21:52.415 "ddgst": ${ddgst:-false} 00:21:52.415 }, 00:21:52.415 "method": "bdev_nvme_attach_controller" 00:21:52.415 } 00:21:52.415 EOF 00:21:52.415 )") 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.415 { 00:21:52.415 "params": { 00:21:52.415 "name": "Nvme$subsystem", 00:21:52.415 "trtype": "$TEST_TRANSPORT", 00:21:52.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.415 "adrfam": "ipv4", 00:21:52.415 "trsvcid": "$NVMF_PORT", 00:21:52.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.415 "hdgst": ${hdgst:-false}, 00:21:52.415 "ddgst": ${ddgst:-false} 00:21:52.415 }, 00:21:52.415 "method": "bdev_nvme_attach_controller" 00:21:52.415 } 00:21:52.415 EOF 00:21:52.415 )") 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.415 { 00:21:52.415 "params": { 00:21:52.415 
"name": "Nvme$subsystem", 00:21:52.415 "trtype": "$TEST_TRANSPORT", 00:21:52.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.415 "adrfam": "ipv4", 00:21:52.415 "trsvcid": "$NVMF_PORT", 00:21:52.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.415 "hdgst": ${hdgst:-false}, 00:21:52.415 "ddgst": ${ddgst:-false} 00:21:52.415 }, 00:21:52.415 "method": "bdev_nvme_attach_controller" 00:21:52.415 } 00:21:52.415 EOF 00:21:52.415 )") 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.415 { 00:21:52.415 "params": { 00:21:52.415 "name": "Nvme$subsystem", 00:21:52.415 "trtype": "$TEST_TRANSPORT", 00:21:52.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.415 "adrfam": "ipv4", 00:21:52.415 "trsvcid": "$NVMF_PORT", 00:21:52.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.415 "hdgst": ${hdgst:-false}, 00:21:52.415 "ddgst": ${ddgst:-false} 00:21:52.415 }, 00:21:52.415 "method": "bdev_nvme_attach_controller" 00:21:52.415 } 00:21:52.415 EOF 00:21:52.415 )") 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.415 { 00:21:52.415 "params": { 00:21:52.415 "name": "Nvme$subsystem", 00:21:52.415 "trtype": "$TEST_TRANSPORT", 00:21:52.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.415 "adrfam": "ipv4", 00:21:52.415 "trsvcid": "$NVMF_PORT", 00:21:52.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.415 "hdgst": ${hdgst:-false}, 00:21:52.415 "ddgst": ${ddgst:-false} 00:21:52.415 }, 00:21:52.415 "method": "bdev_nvme_attach_controller" 00:21:52.415 } 00:21:52.415 EOF 00:21:52.415 )") 00:21:52.415 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.416 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.416 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.416 { 00:21:52.416 "params": { 00:21:52.416 "name": "Nvme$subsystem", 00:21:52.416 "trtype": "$TEST_TRANSPORT", 00:21:52.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.416 "adrfam": "ipv4", 00:21:52.416 "trsvcid": "$NVMF_PORT", 00:21:52.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.416 "hdgst": ${hdgst:-false}, 00:21:52.416 "ddgst": ${ddgst:-false} 00:21:52.416 }, 00:21:52.416 "method": "bdev_nvme_attach_controller" 00:21:52.416 } 00:21:52.416 EOF 00:21:52.416 )") 00:21:52.675 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.676 { 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme$subsystem", 
00:21:52.676 "trtype": "$TEST_TRANSPORT", 00:21:52.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "$NVMF_PORT", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.676 "hdgst": ${hdgst:-false}, 00:21:52.676 "ddgst": ${ddgst:-false} 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 } 00:21:52.676 EOF 00:21:52.676 )") 00:21:52.676 [2024-07-15 07:57:37.175191] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:21:52.676 [2024-07-15 07:57:37.175248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.676 { 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme$subsystem", 00:21:52.676 "trtype": "$TEST_TRANSPORT", 00:21:52.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "$NVMF_PORT", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.676 "hdgst": ${hdgst:-false}, 00:21:52.676 "ddgst": ${ddgst:-false} 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 } 00:21:52.676 EOF 00:21:52.676 )") 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.676 { 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme$subsystem", 00:21:52.676 "trtype": "$TEST_TRANSPORT", 00:21:52.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "$NVMF_PORT", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.676 "hdgst": ${hdgst:-false}, 00:21:52.676 "ddgst": ${ddgst:-false} 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 } 00:21:52.676 EOF 00:21:52.676 )") 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:52.676 { 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme$subsystem", 00:21:52.676 "trtype": "$TEST_TRANSPORT", 00:21:52.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "$NVMF_PORT", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.676 "hdgst": ${hdgst:-false}, 00:21:52.676 "ddgst": ${ddgst:-false} 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 } 00:21:52.676 EOF 00:21:52.676 )") 00:21:52.676 07:57:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:52.676 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:52.676 07:57:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme1", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme2", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme3", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme4", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme5", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme6", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme7", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme8", 00:21:52.676 "trtype": "tcp", 00:21:52.676 
"traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme9", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.676 "hdgst": false, 00:21:52.676 "ddgst": false 00:21:52.676 }, 00:21:52.676 "method": "bdev_nvme_attach_controller" 00:21:52.676 },{ 00:21:52.676 "params": { 00:21:52.676 "name": "Nvme10", 00:21:52.676 "trtype": "tcp", 00:21:52.676 "traddr": "10.0.0.2", 00:21:52.676 "adrfam": "ipv4", 00:21:52.676 "trsvcid": "4420", 00:21:52.676 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.676 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.677 "hdgst": false, 00:21:52.677 "ddgst": false 00:21:52.677 }, 00:21:52.677 "method": "bdev_nvme_attach_controller" 00:21:52.677 }' 00:21:52.677 [2024-07-15 07:57:37.244184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.677 [2024-07-15 07:57:37.317959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3315850 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:54.055 07:57:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:54.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3315850 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:54.992 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3315560 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 [2024-07-15 07:57:39.643033] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:21:54.993 [2024-07-15 07:57:39.643082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316173 ] 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.993 "ddgst": ${ddgst:-false} 00:21:54.993 }, 00:21:54.993 "method": "bdev_nvme_attach_controller" 00:21:54.993 } 00:21:54.993 EOF 00:21:54.993 )") 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:54.993 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:54.993 { 00:21:54.993 "params": { 00:21:54.993 "name": "Nvme$subsystem", 00:21:54.993 "trtype": "$TEST_TRANSPORT", 00:21:54.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:54.993 "adrfam": "ipv4", 00:21:54.993 "trsvcid": "$NVMF_PORT", 00:21:54.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:54.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:54.993 "hdgst": ${hdgst:-false}, 00:21:54.994 "ddgst": ${ddgst:-false} 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 } 00:21:54.994 EOF 00:21:54.994 )") 00:21:54.994 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:54.994 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
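The accumulated fragments are comma-joined (the IFS=, and printf traced below) and piped through jq, and that jq output is what bdevperf actually reads from /dev/fd/62. The outer wrapper never shows up in this trace; reconstructed from nvmf/common.sh (an assumption, shown here for a single controller), the final document has the shape:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }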
00:21:54.994 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.994 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:54.994 07:57:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme1", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme2", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme3", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme4", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme5", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme6", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme7", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme8", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:54.994 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme9", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 },{ 00:21:54.994 "params": { 00:21:54.994 "name": "Nvme10", 00:21:54.994 "trtype": "tcp", 00:21:54.994 "traddr": "10.0.0.2", 00:21:54.994 "adrfam": "ipv4", 00:21:54.994 "trsvcid": "4420", 00:21:54.994 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:54.994 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:54.994 "hdgst": false, 00:21:54.994 "ddgst": false 00:21:54.994 }, 00:21:54.994 "method": "bdev_nvme_attach_controller" 00:21:54.994 }' 00:21:54.994 [2024-07-15 07:57:39.710229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.253 [2024-07-15 07:57:39.784014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.630 Running I/O for 1 seconds... 00:21:57.571 00:21:57.571 Latency(us) 00:21:57.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme1n1 : 1.03 249.68 15.61 0.00 0.00 253849.38 17324.30 217009.64 00:21:57.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme2n1 : 1.14 280.73 17.55 0.00 0.00 221633.49 17210.32 217009.64 00:21:57.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme3n1 : 1.12 290.44 18.15 0.00 0.00 210994.71 8092.27 213362.42 00:21:57.571 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme4n1 : 1.14 283.22 17.70 0.00 0.00 214356.89 2407.74 201508.95 00:21:57.571 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme5n1 : 1.15 278.61 17.41 0.00 0.00 214999.04 19261.89 233422.14 00:21:57.571 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme6n1 : 1.14 281.54 17.60 0.00 0.00 209514.50 15956.59 220656.86 00:21:57.571 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme7n1 : 1.13 283.05 17.69 0.00 0.00 205094.73 16184.54 213362.42 00:21:57.571 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme8n1 : 1.12 284.45 17.78 0.00 0.00 200784.23 13791.05 223392.28 00:21:57.571 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme9n1 : 1.15 281.71 17.61 0.00 0.00 199785.66 3504.75 221568.67 00:21:57.571 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:57.571 
Verification LBA range: start 0x0 length 0x400 00:21:57.571 Nvme10n1 : 1.15 277.77 17.36 0.00 0.00 199868.42 16412.49 238892.97 00:21:57.571 =================================================================================================================== 00:21:57.571 Total : 2791.20 174.45 0.00 0.00 212237.11 2407.74 238892.97 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:57.830 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.831 rmmod nvme_tcp 00:21:57.831 rmmod nvme_fabrics 00:21:57.831 rmmod nvme_keyring 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3315560 ']' 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3315560 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3315560 ']' 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3315560 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.831 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3315560 00:21:58.089 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:58.089 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:58.089 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3315560' 00:21:58.089 killing process with pid 3315560 00:21:58.089 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3315560 00:21:58.089 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3315560 00:21:58.348 
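The stoptarget/nvmftestfini teardown traced above, completed by the nvmf_tcp_fini namespace cleanup on the lines that follow, undoes everything tc1 set up. As a sketch (the exact namespace commands are hidden because _remove_spdk_ns runs with xtrace disabled, so those two lines are assumptions):

    rm -f ./local-job0-0-verify.state                    # bdevperf scratch state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    modprobe -v -r nvme-tcp                              # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                   # plain SIGTERM here, unlike the tc1 kill -9
    ip netns delete cvl_0_0_ns_spdk                      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                             # traced below at nvmf/common.sh@279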
07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.348 07:57:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.886 00:22:00.886 real 0m15.138s 00:22:00.886 user 0m33.690s 00:22:00.886 sys 0m5.654s 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.886 ************************************ 00:22:00.886 END TEST nvmf_shutdown_tc1 00:22:00.886 ************************************ 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.886 ************************************ 00:22:00.886 START TEST nvmf_shutdown_tc2 00:22:00.886 ************************************ 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:00.886 07:57:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.886 07:57:45 
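The array initialization traced above is nvmf/common.sh's NIC inventory: pci_bus_cache (populated earlier in the script) maps vendor:device IDs to the PCI addresses present on the host, the known IDs are grouped by family, and since this run has SPDK_TEST_NVMF_NICS=e810 only the e810 group survives into pci_devs. Condensed (a sketch; only two of the traced mlx IDs shown):

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # matches 0000:86:00.0/.1 on this host
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
    pci_devs+=("${e810[@]}")                     # the [[ e810 == e810 ]] check below keeps only this set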
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.886 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.886 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:00.886 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.886 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.887 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:22:00.887 00:22:00.887 --- 10.0.0.2 ping statistics --- 00:22:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.887 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:22:00.887 00:22:00.887 --- 10.0.0.1 ping statistics --- 00:22:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.887 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3317236 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3317236 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3317236 ']' 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.887 07:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 [2024-07-15 07:57:45.482344] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:00.887 [2024-07-15 07:57:45.482405] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.887 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.887 [2024-07-15 07:57:45.552871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.887 [2024-07-15 07:57:45.633752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.887 [2024-07-15 07:57:45.633782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.887 [2024-07-15 07:57:45.633790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.887 [2024-07-15 07:57:45.633796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.887 [2024-07-15 07:57:45.633801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
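The nvmf_tcp_init trace above wires the two E810 functions found by the PCI scan (0000:86:00.0/.1, device 0x159b, driver ice, net devices cvl_0_0/cvl_0_1) into the test path: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and a ping in each direction confirms the link. Condensed from the trace, same names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

nvmf_tgt is then launched inside that namespace with -e 0xFFFF (all tracepoint groups) and -m 0x1E, a core mask of binary 11110 that pins reactors to cores 1-4 and matches the four "Reactor started on core N" notices below. The doubled "ip netns exec cvl_0_0_ns_spdk" prefix on the launch line comes from NVMF_TARGET_NS_CMD being prepended to NVMF_APP on each init and is harmless.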
00:22:00.887 [2024-07-15 07:57:45.633908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.887 [2024-07-15 07:57:45.634034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.887 [2024-07-15 07:57:45.634074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.887 [2024-07-15 07:57:45.634074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.863 [2024-07-15 07:57:46.328297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.863 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.863 Malloc1 00:22:01.863 [2024-07-15 07:57:46.423973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.863 Malloc2 00:22:01.863 Malloc3 00:22:01.863 Malloc4 00:22:01.863 Malloc5 00:22:01.863 Malloc6 00:22:02.122 Malloc7 00:22:02.122 Malloc8 00:22:02.122 Malloc9 00:22:02.122 Malloc10 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3317518 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3317518 /var/tmp/bdevperf.sock 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3317518 ']' 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
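The ten cat calls above build rpcs.txt one subsystem per iteration (the file is cleared at shutdown.sh@26), and the single rpc_cmd at shutdown.sh@35 replays the whole batch, which is where the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener in the output come from. bdevperf is then started as a second SPDK process against those subsystems; reassembled from the trace (the generated JSON reaches it through bash process substitution, which is why the recorded command line shows --json /dev/fd/63):

    # -q 64: queue depth per bdev; -o 65536: 64 KiB I/Os;
    # -w verify: write, read back, compare; -t 10: run for 10 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10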
00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.122 { 00:22:02.122 "params": { 00:22:02.122 "name": "Nvme$subsystem", 00:22:02.122 "trtype": "$TEST_TRANSPORT", 00:22:02.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.122 "adrfam": "ipv4", 00:22:02.122 "trsvcid": "$NVMF_PORT", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.122 "hdgst": ${hdgst:-false}, 00:22:02.122 "ddgst": ${ddgst:-false} 00:22:02.122 }, 00:22:02.122 "method": "bdev_nvme_attach_controller" 00:22:02.122 } 00:22:02.122 EOF 00:22:02.122 )") 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.122 { 00:22:02.122 "params": { 00:22:02.122 "name": "Nvme$subsystem", 00:22:02.122 "trtype": "$TEST_TRANSPORT", 00:22:02.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.122 "adrfam": "ipv4", 00:22:02.122 "trsvcid": "$NVMF_PORT", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.122 "hdgst": ${hdgst:-false}, 00:22:02.122 "ddgst": ${ddgst:-false} 00:22:02.122 }, 00:22:02.122 "method": "bdev_nvme_attach_controller" 00:22:02.122 } 00:22:02.122 EOF 00:22:02.122 )") 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.122 { 00:22:02.122 "params": { 00:22:02.122 "name": "Nvme$subsystem", 00:22:02.122 "trtype": "$TEST_TRANSPORT", 00:22:02.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.122 "adrfam": "ipv4", 00:22:02.122 "trsvcid": "$NVMF_PORT", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.122 "hdgst": ${hdgst:-false}, 00:22:02.122 "ddgst": ${ddgst:-false} 00:22:02.122 }, 00:22:02.122 "method": "bdev_nvme_attach_controller" 00:22:02.122 } 00:22:02.122 EOF 00:22:02.122 )") 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.122 { 00:22:02.122 "params": { 00:22:02.122 "name": "Nvme$subsystem", 00:22:02.122 "trtype": "$TEST_TRANSPORT", 00:22:02.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.122 "adrfam": "ipv4", 00:22:02.122 "trsvcid": "$NVMF_PORT", 
00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.122 "hdgst": ${hdgst:-false}, 00:22:02.122 "ddgst": ${ddgst:-false} 00:22:02.122 }, 00:22:02.122 "method": "bdev_nvme_attach_controller" 00:22:02.122 } 00:22:02.122 EOF 00:22:02.122 )") 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.122 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.122 { 00:22:02.122 "params": { 00:22:02.122 "name": "Nvme$subsystem", 00:22:02.122 "trtype": "$TEST_TRANSPORT", 00:22:02.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.122 "adrfam": "ipv4", 00:22:02.122 "trsvcid": "$NVMF_PORT", 00:22:02.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.122 "hdgst": ${hdgst:-false}, 00:22:02.122 "ddgst": ${ddgst:-false} 00:22:02.122 }, 00:22:02.122 "method": "bdev_nvme_attach_controller" 00:22:02.122 } 00:22:02.123 EOF 00:22:02.123 )") 00:22:02.381 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.381 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.381 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.381 { 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme$subsystem", 00:22:02.382 "trtype": "$TEST_TRANSPORT", 00:22:02.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "$NVMF_PORT", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.382 "hdgst": ${hdgst:-false}, 00:22:02.382 "ddgst": ${ddgst:-false} 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 } 00:22:02.382 EOF 00:22:02.382 )") 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.382 { 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme$subsystem", 00:22:02.382 "trtype": "$TEST_TRANSPORT", 00:22:02.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "$NVMF_PORT", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.382 "hdgst": ${hdgst:-false}, 00:22:02.382 "ddgst": ${ddgst:-false} 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 } 00:22:02.382 EOF 00:22:02.382 )") 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.382 [2024-07-15 07:57:46.888703] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:22:02.382 [2024-07-15 07:57:46.888755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317518 ] 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.382 { 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme$subsystem", 00:22:02.382 "trtype": "$TEST_TRANSPORT", 00:22:02.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "$NVMF_PORT", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.382 "hdgst": ${hdgst:-false}, 00:22:02.382 "ddgst": ${ddgst:-false} 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 } 00:22:02.382 EOF 00:22:02.382 )") 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.382 { 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme$subsystem", 00:22:02.382 "trtype": "$TEST_TRANSPORT", 00:22:02.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "$NVMF_PORT", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.382 "hdgst": ${hdgst:-false}, 00:22:02.382 "ddgst": ${ddgst:-false} 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 } 00:22:02.382 EOF 00:22:02.382 )") 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.382 { 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme$subsystem", 00:22:02.382 "trtype": "$TEST_TRANSPORT", 00:22:02.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "$NVMF_PORT", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.382 "hdgst": ${hdgst:-false}, 00:22:02.382 "ddgst": ${ddgst:-false} 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 } 00:22:02.382 EOF 00:22:02.382 )") 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
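The config fragments assembled above are what gen_nvmf_target_json turns into the JSON block printed next: one bdev_nvme_attach_controller stanza per subsystem, placeholders such as $subsystem and $NVMF_FIRST_TARGET_IP expanded by the shell, fragments comma-joined (the IFS=, / printf pair in the trace) and pretty-printed through jq. A minimal paraphrase of the pattern, with the stanza trimmed to two fields and the outer wrapper assumed from the helper's conventions (the real heredoc also carries trtype, traddr, adrfam, trsvcid, hostnqn, hdgst and ddgst, as the expanded output below shows):

    # Build one attach-controller stanza per subsystem number given.
    config=()
    for subsystem in "$@"; do
        config+=('{"params": {"name": "Nvme'"$subsystem"'",
            "subnqn": "nqn.2016-06.io.spdk:cnode'"$subsystem"'"},
            "method": "bdev_nvme_attach_controller"}')
    done
    # Comma-join the fragments and let jq validate and pretty-print.
    joined=$(IFS=","; printf '%s' "${config[*]}")
    jq . <<< '{"subsystems": [{"subsystem": "bdev", "config": ['"$joined"']}]}'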
00:22:02.382 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:02.382 07:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme1", 00:22:02.382 "trtype": "tcp", 00:22:02.382 "traddr": "10.0.0.2", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "4420", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.382 "hdgst": false, 00:22:02.382 "ddgst": false 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 },{ 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme2", 00:22:02.382 "trtype": "tcp", 00:22:02.382 "traddr": "10.0.0.2", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "4420", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.382 "hdgst": false, 00:22:02.382 "ddgst": false 00:22:02.382 }, 00:22:02.382 "method": "bdev_nvme_attach_controller" 00:22:02.382 },{ 00:22:02.382 "params": { 00:22:02.382 "name": "Nvme3", 00:22:02.382 "trtype": "tcp", 00:22:02.382 "traddr": "10.0.0.2", 00:22:02.382 "adrfam": "ipv4", 00:22:02.382 "trsvcid": "4420", 00:22:02.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:02.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme4", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme5", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme6", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme7", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme8", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:02.383 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme9", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 },{ 00:22:02.383 "params": { 00:22:02.383 "name": "Nvme10", 00:22:02.383 "trtype": "tcp", 00:22:02.383 "traddr": "10.0.0.2", 00:22:02.383 "adrfam": "ipv4", 00:22:02.383 "trsvcid": "4420", 00:22:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:02.383 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:02.383 "hdgst": false, 00:22:02.383 "ddgst": false 00:22:02.383 }, 00:22:02.383 "method": "bdev_nvme_attach_controller" 00:22:02.383 }' 00:22:02.383 [2024-07-15 07:57:46.943067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.383 [2024-07-15 07:57:47.018321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.759 Running I/O for 10 seconds... 00:22:03.759 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.759 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:03.759 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:03.759 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.760 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:04.019 07:57:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:04.019 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:04.277 07:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:04.535 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3317518 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3317518 ']' 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3317518 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3317518 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.536 07:57:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3317518' killing process with pid 3317518 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3317518 00:22:04.536 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3317518
00:22:04.794 Received shutdown signal, test time was about 0.921256 seconds
00:22:04.794
00:22:04.794 Latency(us)
00:22:04.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.794 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.794 Verification LBA range: start 0x0 length 0x400
00:22:04.794 Nvme1n1 : 0.90 283.55 17.72 0.00 0.00 223384.04 18919.96 203332.56
00:22:04.795 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme2n1 : 0.92 282.49 17.66 0.00 0.00 219779.58 3405.02 217009.64
00:22:04.795 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme3n1 : 0.92 344.73 21.55 0.00 0.00 176260.47 7693.36 215186.03
00:22:04.795 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme4n1 : 0.91 284.92 17.81 0.00 0.00 209731.64 4673.00 217921.45
00:22:04.795 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme5n1 : 0.92 279.75 17.48 0.00 0.00 210474.52 16640.45 215186.03
00:22:04.795 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme6n1 : 0.91 282.40 17.65 0.00 0.00 204459.41 18578.03 219745.06
00:22:04.795 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme7n1 : 0.91 281.10 17.57 0.00 0.00 201539.90 17210.32 216097.84
00:22:04.795 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme8n1 : 0.90 289.57 18.10 0.00 0.00 190761.65 2436.23 221568.67
00:22:04.795 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme9n1 : 0.88 217.14 13.57 0.00 0.00 249477.86 34192.70 217921.45
00:22:04.795 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.795 Verification LBA range: start 0x0 length 0x400
00:22:04.795 Nvme10n1 : 0.89 216.51 13.53 0.00 0.00 244854.72 18464.06 233422.14
00:22:04.795 ===================================================================================================================
00:22:04.795 Total : 2762.17 172.64 0.00 0.00 210403.59 2436.23 233422.14
00:22:04.795 07:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:06.170 07:57:50
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.170 rmmod nvme_tcp 00:22:06.170 rmmod nvme_fabrics 00:22:06.170 rmmod nvme_keyring 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3317236 ']' 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3317236 ']' 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3317236' 00:22:06.170 killing process with pid 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3317236 00:22:06.170 07:57:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3317236 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
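tc2's pass gate is the waitforio polling seen a little earlier: bdev_get_iostat is queried over the bdevperf RPC socket until Nvme1n1 has at least 100 completed reads (the three polls returned 3, 67, then 195), after which bdevperf (pid 3317518) is killed, kill -0 confirms the target (pid 3317236) survived the abrupt client exit, and stoptarget begins the teardown traced above. The loop, paraphrased from the shutdown.sh trace with rpc_cmd expanded to its scripts/rpc.py equivalent:

    # Poll up to 10 times, 0.25 s apart, until 100 reads have completed.
    i=10 ret=1
    while ((i != 0)); do
        n=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$n" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        ((i--))
    done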
00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.429 07:57:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.356 00:22:08.356 real 0m7.947s 00:22:08.356 user 0m24.068s 00:22:08.356 sys 0m1.305s 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.356 ************************************ 00:22:08.356 END TEST nvmf_shutdown_tc2 00:22:08.356 ************************************ 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.356 07:57:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 ************************************ 00:22:08.615 START TEST nvmf_shutdown_tc3 00:22:08.615 ************************************ 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
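Between the END TEST banner for tc2 and the START TEST banner for tc3, nvmftestfini has unloaded the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), remove_spdk_ns has deleted the cvl_0_0_ns_spdk namespace, and the cvl_0_1 addresses were flushed, so tc3's nvmftestinit rebuilds the identical topology from scratch and the PCI discovery trace repeats verbatim. Each case runs under run_test, roughly this shape (an assumed paraphrase of the autotest_common.sh helper, inferred from the banners and the real/user/sys line it emits):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }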
00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.615 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.616 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:08.616 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.616 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.616 07:57:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.616 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.616 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.873 07:57:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:08.873 00:22:08.873 --- 10.0.0.2 ping statistics --- 00:22:08.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.873 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:22:08.873 00:22:08.873 --- 10.0.0.1 ping statistics --- 00:22:08.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.873 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.873 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3318726 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3318726 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3318726 ']' 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.874 07:57:53 
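Before the target starts, common.sh proves the two namespaces can reach each other and that the NVMe/TCP port is open; the checks in the trace above reduce to:

    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
    modprobe nvme-tcp                                   # kernel-side TCP transport support
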
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.874 07:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:08.874 [2024-07-15 07:57:53.513241] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:08.874 [2024-07-15 07:57:53.513283] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.874 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.874 [2024-07-15 07:57:53.584371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.131 [2024-07-15 07:57:53.659375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.131 [2024-07-15 07:57:53.659416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.131 [2024-07-15 07:57:53.659423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.131 [2024-07-15 07:57:53.659429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.131 [2024-07-15 07:57:53.659433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.131 [2024-07-15 07:57:53.659581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.131 [2024-07-15 07:57:53.659690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.131 [2024-07-15 07:57:53.659795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.131 [2024-07-15 07:57:53.659797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.696 [2024-07-15 07:57:54.356069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
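The -m 0x1E mask handed to nvmf_tgt is why the four reactor notices above land on cores 1 through 4: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1-4 are set. A quick way to decode any such mask in the same shell dialect:

    mask=0x1E
    for core in 0 1 2 3 4 5; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 1, 2, 3 and 4, matching the reactor_run notices in the log
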
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.696 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.696 Malloc1 00:22:09.954 [2024-07-15 07:57:54.451787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.954 Malloc2 00:22:09.954 Malloc3 00:22:09.954 Malloc4 00:22:09.954 Malloc5 00:22:09.954 Malloc6 00:22:09.954 Malloc7 00:22:10.212 Malloc8 00:22:10.212 Malloc9 00:22:10.212 Malloc10 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
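Each pass of the shutdown.sh@27-28 loop above appends one subsystem's worth of RPCs to rpcs.txt, and shutdown.sh@35 then replays the whole batch through rpc_cmd in a single call, which is what produces Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. Per iteration it is roughly the following sketch (the malloc bdev size and serial-number format are illustrative, not recorded in this log):

    i=1                                      # the harness iterates i over 1..10
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
    rpc_cmd < rpcs.txt                       # shutdown.sh@35: replay the batch in one shot
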
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3319015 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3319015 /var/tmp/bdevperf.sock 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3319015 ']' 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 [2024-07-15 07:57:54.924355] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:10.212 [2024-07-15 07:57:54.924403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319015 ] 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:10.212 07:57:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:10.212 { 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme$subsystem", 00:22:10.212 "trtype": "$TEST_TRANSPORT", 00:22:10.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "$NVMF_PORT", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.212 "hdgst": ${hdgst:-false}, 00:22:10.212 "ddgst": ${ddgst:-false} 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 } 00:22:10.212 EOF 00:22:10.212 )") 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:10.212 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:10.212 07:57:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme1", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme2", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme3", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme4", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme5", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme6", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme7", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.212 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:10.212 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:10.212 "hdgst": false, 00:22:10.212 "ddgst": false 00:22:10.212 }, 00:22:10.212 "method": "bdev_nvme_attach_controller" 00:22:10.212 },{ 00:22:10.212 "params": { 00:22:10.212 "name": "Nvme8", 00:22:10.212 "trtype": "tcp", 00:22:10.212 "traddr": "10.0.0.2", 00:22:10.212 "adrfam": "ipv4", 00:22:10.212 "trsvcid": "4420", 00:22:10.213 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:10.213 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:10.213 "hdgst": false, 00:22:10.213 "ddgst": false 00:22:10.213 }, 00:22:10.213 "method": "bdev_nvme_attach_controller" 00:22:10.213 },{ 00:22:10.213 "params": { 00:22:10.213 "name": "Nvme9", 00:22:10.213 "trtype": "tcp", 00:22:10.213 "traddr": "10.0.0.2", 00:22:10.213 "adrfam": "ipv4", 00:22:10.213 "trsvcid": "4420", 00:22:10.213 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:10.213 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:10.213 "hdgst": false, 00:22:10.213 "ddgst": false 00:22:10.213 }, 00:22:10.213 "method": "bdev_nvme_attach_controller" 00:22:10.213 },{ 00:22:10.213 "params": { 00:22:10.213 "name": "Nvme10", 00:22:10.213 "trtype": "tcp", 00:22:10.213 "traddr": "10.0.0.2", 00:22:10.213 "adrfam": "ipv4", 00:22:10.213 "trsvcid": "4420", 00:22:10.213 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:10.213 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:10.213 "hdgst": false, 00:22:10.213 "ddgst": false 00:22:10.213 }, 00:22:10.213 "method": "bdev_nvme_attach_controller" 00:22:10.213 }' 00:22:10.470 [2024-07-15 07:57:54.990856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.470 [2024-07-15 07:57:55.063721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.373 Running I/O for 10 seconds... 
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=148 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 148 -ge 100 ']' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3318726 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3318726 ']' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3318726 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
3318726
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3318726'
00:22:12.967 killing process with pid 3318726
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3318726
00:22:12.967 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3318726
00:22:12.967 [2024-07-15 07:57:57.554381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819430 is same with the state(5) to be set
00:22:12.967-00:22:12.969 [2024-07-15 07:57:57.554-07:57:57.560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: (the same "recv state ... is same with the state(5) to be set" message repeats back-to-back for tqpairs 0x1819430, 0x19fca00, 0x18198d0 and 0x181a230; repeats trimmed)
state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.969 [2024-07-15 07:57:57.560308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.560477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a230 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 
07:57:57.561108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same 
with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561381] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.561482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a6d0 is same with the state(5) to be set 00:22:12.970 [2024-07-15 07:57:57.562262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the 
state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.562320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ab70 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 
07:57:57.563310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same 
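The flood of tcp.c:1607 (target side) and, below, nvme_tcp.c:327 (host side) errors comes from SPDK's recv-state setter refusing a no-op transition: once a TCP qpair has been parked in its terminal recv state during teardown, every further attempt to set that same state is logged. A self-contained model of that guard follows; the function and field names mirror SPDK's lib/nvmf/tcp.c, but the enum ordering (state 5 as the error state) and the struct layout are assumptions for illustration, not the SPDK source.

/* Model of the guard behind the tcp.c:1607 messages above.
 * Illustrative sketch only, not the SPDK source. */
#include <stdio.h>

enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,	/* assumed to be the "state(5)" in this log */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void
qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Re-entering the current state is rejected and logged;
		 * a qpair repeatedly driven into RECV_STATE_ERROR while a
		 * connection is torn down therefore spams this line. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	qpair_set_recv_state(&q, RECV_STATE_ERROR);	/* transitions, silent */
	qpair_set_recv_state(&q, RECV_STATE_ERROR);	/* repeat: logs as above */
	return 0;
}

Since the setter simply returns on a repeated state, the repetition itself does not change qpair state; the volume of messages reflects how often the poller re-enters the same terminal state during disconnect, not an escalating failure.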
with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.971 [2024-07-15 07:57:57.563452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b010 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189bf0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with [2024-07-15 07:57:57.564285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:22:12.972 id:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) 
to be set 00:22:12.972 [2024-07-15 07:57:57.564296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 07:57:57.564325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.972 the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 07:57:57.564333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with [2024-07-15 07:57:57.564358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1165190 is same the state(5) to be set 00:22:12.972 with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 
[2024-07-15 07:57:57.564392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 07:57:57.564393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1186b30 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564476] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 07:57:57.564498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with [2024-07-15 07:57:57.564508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:22:12.972 id:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc91340 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with [2024-07-15 07:57:57.564564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:22:12.972 id:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.972 [2024-07-15 07:57:57.564587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.972 [2024-07-15 07:57:57.564591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.972 [2024-07-15 07:57:57.564594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-15 07:57:57.564615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.973 the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 07:57:57.564624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117f1d0 is same [2024-07-15 07:57:57.564633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with with the state(5) to be set 00:22:12.973 the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-15 07:57:57.564659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:12.973 the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f78b0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e8d0 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1142c70 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.564925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.973 [2024-07-15 07:57:57.564972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.564978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317050 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to 
be set 00:22:12.973 [2024-07-15 07:57:57.565280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.973 [2024-07-15 07:57:57.565343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.565360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.973 [2024-07-15 07:57:57.565381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.565388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.973 [2024-07-15 07:57:57.565394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.565401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.973 [2024-07-15 07:57:57.565414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.565421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.973 [2024-07-15 07:57:57.565426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.973 [2024-07-15 07:57:57.565434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.973 [2024-07-15 07:57:57.565434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 
[2024-07-15 07:57:57.565478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.974 [2024-07-15 07:57:57.565496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.974 [2024-07-15 07:57:57.565989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.974 [2024-07-15 07:57:57.565996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:12.975 [2024-07-15 07:57:57.566372] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x120b920 was disconnected and freed. reset controller. 
00:22:12.975 [2024-07-15 07:57:57.566506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.975 [2024-07-15 07:57:57.566518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.975 [2024-07-15 07:57:57.566532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 
07:57:57.566663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 
07:57:57.566810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.566943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.566995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.567044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.567096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.567145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.976 [2024-07-15 07:57:57.567197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.976 [2024-07-15 07:57:57.567252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 
07:57:57.567315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.567365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.567419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.567468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.567519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.567570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.567622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.567670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.577339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 
07:57:57.577425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.577524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b970 is same with the state(5) to be set 00:22:12.977 [2024-07-15 07:57:57.589121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.977 [2024-07-15 07:57:57.589619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.977 [2024-07-15 07:57:57.589628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.978 [2024-07-15 07:57:57.589750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.978 [2024-07-15 07:57:57.589828] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1284910 was disconnected and freed. reset controller. 
00:22:12.978 [2024-07-15 07:57:57.590135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189bf0 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165190 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1186b30 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91340 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117f1d0 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f78b0 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.978 [2024-07-15 07:57:57.590288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.978 [2024-07-15 07:57:57.590299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.978 [2024-07-15 07:57:57.590308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.978 [2024-07-15 07:57:57.590319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.978 [2024-07-15 07:57:57.590329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.978 [2024-07-15 07:57:57.590339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:12.978 [2024-07-15 07:57:57.590348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.978 [2024-07-15 07:57:57.590357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e0d0 is same with the state(5) to be set
00:22:12.978 [2024-07-15 07:57:57.590377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e8d0 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142c70 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.590413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317050 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.594171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:12.978 [2024-07-15 07:57:57.594780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:12.978 [2024-07-15 07:57:57.594951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.978 [2024-07-15 07:57:57.594986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1317050 with addr=10.0.0.2, port=4420
00:22:12.978 [2024-07-15 07:57:57.595001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317050 is same with the state(5) to be set
00:22:12.978 [2024-07-15 07:57:57.596077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:12.978 [2024-07-15 07:57:57.596272] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:12.978 [2024-07-15 07:57:57.596491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.978 [2024-07-15 07:57:57.596514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1186b30 with addr=10.0.0.2, port=4420
00:22:12.978 [2024-07-15 07:57:57.596529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1186b30 is same with the state(5) to be set
00:22:12.978 [2024-07-15 07:57:57.596546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317050 (9): Bad file descriptor
[00:22:12.978, 2024-07-15 07:57:57.596605-596961: nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (6 occurrences)]
00:22:12.978 [2024-07-15 07:57:57.596998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1186b30 (9): Bad file descriptor
00:22:12.978 [2024-07-15 07:57:57.597017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:12.978 [2024-07-15 07:57:57.597029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:12.978 [2024-07-15 07:57:57.597043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:12.978 [2024-07-15 07:57:57.597177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:12.978 [2024-07-15 07:57:57.597194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:12.978 [2024-07-15 07:57:57.597206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:12.978 [2024-07-15 07:57:57.597217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:12.978 [2024-07-15 07:57:57.597296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:12.978 [2024-07-15 07:57:57.600173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e0d0 (9): Bad file descriptor
[00:22:12.978, 2024-07-15 07:57:57.600370-600542: 6 WRITE commands sqid:1 cid:58-63 nsid:1 lba:32000-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
[00:22:12.978-980, 2024-07-15 07:57:57.600557-601706: 58 READ commands sqid:1 cid:0-57 nsid:1 lba:24576-31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:12.980 [2024-07-15 07:57:57.601714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1283ea0 is same with the state(5) to be set
[00:22:12.980-981, 2024-07-15 07:57:57.602900-604064: 64 READ commands sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:12.981 [2024-07-15 07:57:57.604074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120a490 is same with the state(5) to be set
[00:22:12.981-982, 2024-07-15 07:57:57.605251-606172: 51 READ commands sqid:1 cid:0-50 nsid:1 lba:24576-30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:12.982 [2024-07-15 07:57:57.606181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:12.983 [2024-07-15 07:57:57.606189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 
07:57:57.606373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.606411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.606420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113cb70 is same with the state(5) to be set 00:22:12.983 [2024-07-15 07:57:57.607585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.607989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.607998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.983 [2024-07-15 07:57:57.608109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.983 [2024-07-15 07:57:57.608117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.984 [2024-07-15 07:57:57.608760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.984 [2024-07-15 07:57:57.608770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113e040 is same with the state(5) to be set 00:22:12.984 [2024-07-15 07:57:57.609934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.609953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.609965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.609973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.609983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.609992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.985 [2024-07-15 07:57:57.610726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.985 [2024-07-15 07:57:57.610736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.986 [2024-07-15 07:57:57.610907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.986 [2024-07-15 07:57:57.610914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:12.986 [2024-07-15 07:57:57.610923 - 07:57:57.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:48..58 nsid:1 lba:30720..32000 step:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (11 identical command/completion pairs)
00:22:12.986 [2024-07-15 07:57:57.615127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285de0 is same with the state(5) to be set
00:22:12.986 [2024-07-15 07:57:57.616121 - 07:57:57.617148] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 step:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs)
00:22:12.987 [2024-07-15 07:57:57.617156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12872b0 is same with the state(5) to be set
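Every aborted command above carries the same completion status pair (00/08): status code type 0x00 (generic command status) and status code 0x08, "command aborted due to SQ deletion", which is what the controller reports for I/O still outstanding on a submission queue when that queue is torn down by the reset. A minimal, self-contained C sketch of the decode; the constants are defined locally to mirror the NVMe spec values and are not taken from any SPDK header:

    #include <stdint.h>
    #include <stdio.h>

    /* Local constants mirroring NVMe spec values; (sct/sc) as printed in the log. */
    #define NVME_SCT_GENERIC            0x00 /* Generic Command Status */
    #define NVME_SC_ABORTED_SQ_DELETION 0x08 /* Command Aborted due to SQ Deletion */

    /* Decode the "(sct/sc)" pair from the log into a human-readable verdict. */
    static const char *decode_status(uint8_t sct, uint8_t sc)
    {
        if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
            return "ABORTED - SQ DELETION";
        return "other status";
    }

    int main(void)
    {
        /* Every pair in this log is (00/08). */
        printf("(%02x/%02x) -> %s\n", 0x00, 0x08, decode_status(0x00, 0x08));
        return 0;
    }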
00:22:12.987 [2024-07-15 07:57:57.618175 - 07:57:57.619185] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 step:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs)
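The condensed sweeps are completely regular: command ID n reads 128 blocks starting at lba 16384 + 128*n, so one full sweep of cid 0..63 covers lba 16384 through 24448. A short C illustration that regenerates the same command list the log prints; this is derived only from the pattern in the log, not from SPDK code:

    #include <stdio.h>

    int main(void)
    {
        /* Reproduce the READ pattern of one condensed sweep:
         * cid 0..63, each reading len:128 blocks, lba stride 128. */
        for (int cid = 0; cid <= 63; cid++) {
            unsigned long lba = 16384UL + 128UL * (unsigned long)cid;
            printf("READ sqid:1 cid:%d nsid:1 lba:%lu len:128\n", cid, lba);
        }
        return 0;
    }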
00:22:12.989 [2024-07-15 07:57:57.619193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126cef0 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.620509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620614] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:12.989 [2024-07-15 07:57:57.620631] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:12.989 [2024-07-15 07:57:57.620643] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:12.989 [2024-07-15 07:57:57.620733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:12.989 [2024-07-15 07:57:57.620754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:12.989 [2024-07-15 07:57:57.621062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.621077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1142c70 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.621085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1142c70 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.621335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.621347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130e8d0 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.621354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e8d0 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.621510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.621521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117f1d0 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.621529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117f1d0 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.621682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.621693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc91340 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.621700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc91340 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.623267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:12.989 [2024-07-15 07:57:57.623285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
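errno = 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 (the address and port are taken from the log lines above) is not accepting connections at the moment the reconnect is attempted, which is expected while the target side of the reset is still down. A self-contained POSIX sketch that surfaces the same failing connect(); it assumes Linux errno numbering:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        /* While nothing is listening on the target port, this fails with
         * errno 111 (ECONNREFUSED), exactly as posix_sock_create reports. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));

        close(fd);
        return 0;
    }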
00:22:12.989 [2024-07-15 07:57:57.623477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.623490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1165190 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.623497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1165190 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.623670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.623681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1189bf0 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.623688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189bf0 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.623783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.989 [2024-07-15 07:57:57.623794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f78b0 with addr=10.0.0.2, port=4420
00:22:12.989 [2024-07-15 07:57:57.623801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f78b0 is same with the state(5) to be set
00:22:12.989 [2024-07-15 07:57:57.623813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142c70 (9): Bad file descriptor
00:22:12.989 [2024-07-15 07:57:57.623823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e8d0 (9): Bad file descriptor
00:22:12.990 [2024-07-15 07:57:57.623832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117f1d0 (9): Bad file descriptor
00:22:12.990 [2024-07-15 07:57:57.623841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91340 (9): Bad file descriptor
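The flush failures report errno 9, which is EBADF ("Bad file descriptor") on Linux: by the time process_completions tries to flush these qpairs, their sockets have already been torn down by the reset path. A tiny C demonstration of how I/O on an already-closed descriptor surfaces the same errno; this is an illustration of the errno, not SPDK's flush path:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;

        /* Close the write end first, then attempt to use it: the kernel
         * rejects the stale descriptor with errno 9 (EBADF), the same
         * "(9): Bad file descriptor" shown in the flush errors above. */
        close(fds[1]);
        if (write(fds[1], "x", 1) < 0)
            fprintf(stderr, "write failed, errno = %d (%s)\n",
                    errno, strerror(errno));

        close(fds[0]);
        return 0;
    }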
00:22:12.990 [2024-07-15 07:57:57.623922 - 07:57:57.624777] nvme_qpair.c: 243:nvme_io_qpair_print_command + 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..53 nsid:1 lba:16384..23168 step:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (54 identical command/completion pairs)
00:22:12.991 [2024-07-15 07:57:57.624785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.991 [2024-07-15 07:57:57.624947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.991 [2024-07-15 07:57:57.624955] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126ba60 is same with the state(5) to be set
00:22:12.991 task offset: 27520 on job bdev=Nvme3n1 fails
00:22:12.991
00:22:12.991 Latency(us)
00:22:12.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:12.991 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme1n1 ended in about 0.82 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme1n1 : 0.82 234.23 14.64 78.08 0.00 202691.67 15842.62 203332.56
00:22:12.991 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme2n1 ended in about 0.82 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme2n1 : 0.82 155.71 9.73 77.85 0.00 265852.96 17552.25 219745.06
00:22:12.991 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme3n1 ended in about 0.81 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme3n1 : 0.81 237.42 14.84 79.14 0.00 191987.53 17666.23 217009.64
00:22:12.991 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme4n1 ended in about 0.82 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme4n1 : 0.82 232.90 14.56 77.63 0.00 191912.29 15044.79 208803.39
00:22:12.991 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme5n1 ended in about 0.83 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme5n1 : 0.83 154.83 9.68 77.41 0.00 251584.93 20287.67 223392.28
00:22:12.991 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme6n1 ended in about 0.81 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme6n1 : 0.81 236.91 14.81 78.97 0.00 180535.87 19945.74 214274.23
00:22:12.991 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme7n1 ended in about 0.83 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme7n1 : 0.83 230.48 14.41 76.83 0.00 182268.33 13050.21 220656.86
00:22:12.991 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme8n1 ended in about 0.84 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme8n1 : 0.84 153.28 9.58 76.64 0.00 238465.41 14531.90 264423.51
00:22:12.991 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme9n1 ended in about 0.84 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme9n1 : 0.84 151.87 9.49 75.93 0.00 235836.70 25302.59 227951.30
00:22:12.991 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:12.991 Job: Nvme10n1 ended in about 0.84 seconds with error
00:22:12.991 Verification LBA range: start 0x0 length 0x400
00:22:12.991 Nvme10n1 : 0.84 152.91 9.56 76.45 0.00 228625.66 18236.10 237069.36
00:22:12.991 ===================================================================================================================
00:22:12.991 Total : 1940.54 121.28 774.94 0.00 213105.14 13050.21 264423.51
00:22:12.991 [2024-07-15 07:57:57.647669] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:12.991
[2024-07-15 07:57:57.647720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:12.991 [2024-07-15 07:57:57.648057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.991 [2024-07-15 07:57:57.648077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1317050 with addr=10.0.0.2, port=4420 00:22:12.991 [2024-07-15 07:57:57.648087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1317050 is same with the state(5) to be set 00:22:12.991 [2024-07-15 07:57:57.648321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.991 [2024-07-15 07:57:57.648333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1186b30 with addr=10.0.0.2, port=4420 00:22:12.991 [2024-07-15 07:57:57.648341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1186b30 is same with the state(5) to be set 00:22:12.991 [2024-07-15 07:57:57.648355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165190 (9): Bad file descriptor 00:22:12.991 [2024-07-15 07:57:57.648367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189bf0 (9): Bad file descriptor 00:22:12.991 [2024-07-15 07:57:57.648376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f78b0 (9): Bad file descriptor 00:22:12.991 [2024-07-15 07:57:57.648384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.991 [2024-07-15 07:57:57.648390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.991 [2024-07-15 07:57:57.648397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.991 [2024-07-15 07:57:57.648412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:12.992 [2024-07-15 07:57:57.648436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:12.992 [2024-07-15 07:57:57.648458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:12.992 [2024-07-15 07:57:57.648588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
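
The status pair printed for every aborted command above decodes per the NVMe spec: (00/08) is status code type 0x0 (Generic Command Status) with status code 0x08, Command Aborted due to SQ Deletion, meaning submission queue 1 was deleted while reads cid 14 through 63 were still outstanding, consistent with a shutdown-under-load test. The bdevperf summary table is also internally consistent: with the 65536-byte (64 KiB) IOs named in each job header, the MiB/s column is simply IOPS divided by 16. A quick hedged check, with the numbers copied from the Nvme1n1 row:

  awk 'BEGIN { iops = 234.23; io_bytes = 65536;                            # Nvme1n1 row, 64 KiB IOs
               printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'   # prints 14.64

The same relation holds for the Total row: 1940.54 IOPS / 16 = 121.28 MiB/s.
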
00:22:12.992 [2024-07-15 07:57:57.648600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.648606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.648612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.648882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.992 [2024-07-15 07:57:57.648894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130e0d0 with addr=10.0.0.2, port=4420 00:22:12.992 [2024-07-15 07:57:57.648902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e0d0 is same with the state(5) to be set 00:22:12.992 [2024-07-15 07:57:57.648911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1317050 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.648924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1186b30 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.648932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:12.992 [2024-07-15 07:57:57.648954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:12.992 [2024-07-15 07:57:57.648977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.648982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.648989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:12.992 [2024-07-15 07:57:57.649031] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.992 [2024-07-15 07:57:57.649042] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.992 [2024-07-15 07:57:57.649051] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.992 [2024-07-15 07:57:57.649061] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.992 [2024-07-15 07:57:57.649070] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:12.992 [2024-07-15 07:57:57.649341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.649351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
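
Every one of these connect() failures reports errno = 111, which is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any longer, so the kernel rejects each reconnect attempt immediately instead of letting it time out. When reproducing this by hand it can help to wait for the listener to come back before retrying; a minimal sketch using bash's built-in /dev/tcp redirection (wait_for_listener is a hypothetical helper, not part of the test suite):

  # Probe the NVMe/TCP listener until a TCP connect() succeeds, or give up.
  wait_for_listener() {
      local ip=$1 port=$2 tries=${3:-20}
      for ((i = 0; i < tries; i++)); do
          # the subshell opens the probe socket and closes it on exit;
          # success means something is accepting connections again
          if (exec 3<>"/dev/tcp/${ip}/${port}") 2>/dev/null; then
              return 0
          fi
          sleep 0.5        # connect() refused (errno 111): listener not back yet
      done
      return 1
  }
  wait_for_listener 10.0.0.2 4420
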
00:22:12.992 [2024-07-15 07:57:57.649357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.649374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e0d0 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.649383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.649388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.649395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:12.992 [2024-07-15 07:57:57.649404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.649411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.649418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:12.992 [2024-07-15 07:57:57.649455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:12.992 [2024-07-15 07:57:57.649466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:12.992 [2024-07-15 07:57:57.649474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:12.992 [2024-07-15 07:57:57.649482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.992 [2024-07-15 07:57:57.649490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.649497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.649525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.649531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.649537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:12.992 [2024-07-15 07:57:57.649566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
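
Note the interleaving here: while the first wave of resets is still being torn down, the queued failovers are refused ("already in progress"), and only afterwards are those resets reported failed. When picking such a storm apart interactively, the per-controller state can be inspected with SPDK's stock rpc.py and its bdev_nvme_get_controllers method; the socket path below follows this suite's bdevperf convention and should be adjusted to whatever app owns the controllers:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
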
00:22:12.992 [2024-07-15 07:57:57.649797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.992 [2024-07-15 07:57:57.649809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc91340 with addr=10.0.0.2, port=4420 00:22:12.992 [2024-07-15 07:57:57.649817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc91340 is same with the state(5) to be set 00:22:12.992 [2024-07-15 07:57:57.650038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.992 [2024-07-15 07:57:57.650049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117f1d0 with addr=10.0.0.2, port=4420 00:22:12.992 [2024-07-15 07:57:57.650055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117f1d0 is same with the state(5) to be set 00:22:12.992 [2024-07-15 07:57:57.650203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.992 [2024-07-15 07:57:57.650213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130e8d0 with addr=10.0.0.2, port=4420 00:22:12.992 [2024-07-15 07:57:57.650219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e8d0 is same with the state(5) to be set 00:22:12.992 [2024-07-15 07:57:57.650366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.992 [2024-07-15 07:57:57.650377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1142c70 with addr=10.0.0.2, port=4420 00:22:12.992 [2024-07-15 07:57:57.650384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1142c70 is same with the state(5) to be set 00:22:12.992 [2024-07-15 07:57:57.650410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91340 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.650419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117f1d0 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.650428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e8d0 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.650436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142c70 (9): Bad file descriptor 00:22:12.992 [2024-07-15 07:57:57.650460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.650467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.650474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:12.992 [2024-07-15 07:57:57.650483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.650489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.650495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
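
With every path dead, the harness gives up and tears the target down; the lines that follow replay that cleanup. The stale nvmfpid is killed with the resulting "No such process" tolerated, then the nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded under set +e with retries. A condensed sketch of the pattern, with the PID and module names taken from the log (the retry condition is abbreviated for illustration):

  kill -9 3319015 2>/dev/null || true    # target already exited via spdk_app_stop
  set +e                                 # module removal may fail while references drain
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # rmmod output shows nvme_fabrics/nvme_keyring going too
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e
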
00:22:12.992 [2024-07-15 07:57:57.650503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.650509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.650515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:12.992 [2024-07-15 07:57:57.650523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:12.992 [2024-07-15 07:57:57.650533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:12.992 [2024-07-15 07:57:57.650539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.992 [2024-07-15 07:57:57.650563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.650569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.650575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:12.992 [2024-07-15 07:57:57.650581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.251 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:13.251 07:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3319015 00:22:14.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3319015) - No such process 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:14.630 07:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.630 rmmod nvme_tcp 00:22:14.630 rmmod nvme_fabrics 00:22:14.630 rmmod nvme_keyring 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.630 07:57:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.630 07:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.536 00:22:16.536 real 0m7.989s 00:22:16.536 user 0m20.138s 00:22:16.536 sys 0m1.288s 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.536 ************************************ 00:22:16.536 END TEST nvmf_shutdown_tc3 00:22:16.536 ************************************ 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:16.536 00:22:16.536 real 0m31.419s 00:22:16.536 user 1m18.038s 00:22:16.536 sys 0m8.473s 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.536 07:58:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.536 ************************************ 00:22:16.536 END TEST nvmf_shutdown 00:22:16.536 ************************************ 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:16.536 07:58:01 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.536 07:58:01 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.536 07:58:01 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:16.536 07:58:01 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.536 07:58:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.851 ************************************ 00:22:16.851 START TEST nvmf_multicontroller 
00:22:16.851 ************************************ 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:16.851 * Looking for test storage... 00:22:16.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:16.851 
07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.851 07:58:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.126 07:58:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:22.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:22.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:22.126 Found net devices under 0000:86:00.0: cvl_0_0 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:22.126 Found net devices under 0000:86:00.1: cvl_0_1 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.126 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.385 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.385 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.385 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.385 07:58:06 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.385 07:58:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:22.385 00:22:22.385 --- 10.0.0.2 ping statistics --- 00:22:22.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.385 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:22.385 00:22:22.385 --- 10.0.0.1 ping statistics --- 00:22:22.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.385 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.385 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3323267 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3323267 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3323267 ']' 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.645 07:58:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.645 [2024-07-15 07:58:07.202500] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:22.645 [2024-07-15 07:58:07.202544] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.645 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.645 [2024-07-15 07:58:07.273469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:22.645 [2024-07-15 07:58:07.344778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.645 [2024-07-15 07:58:07.344819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.645 [2024-07-15 07:58:07.344826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.645 [2024-07-15 07:58:07.344832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.645 [2024-07-15 07:58:07.344837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
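
Stepping back to the network setup a few records above: the two E810 ports found at 0000:86:00.0/.1 (cvl_0_0 and cvl_0_1, ice driver) are split into a self-contained target/initiator pair on one host. cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and is addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), an iptables rule admits NVMe/TCP traffic, and both directions are ping-verified. Condensed from the log, the plumbing is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

nvmf_tgt is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE invocation just replayed), so the listeners it creates on 10.0.0.2 ports 4420/4421 below are reachable only through cvl_0_1.
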
00:22:22.645 [2024-07-15 07:58:07.344954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.645 [2024-07-15 07:58:07.345037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.645 [2024-07-15 07:58:07.345039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 [2024-07-15 07:58:08.049269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 Malloc0 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 [2024-07-15 07:58:08.116203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 
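At this point one complete subsystem exists (cnode1 with the Malloc0 namespace and a listener on 4420); the trace below adds a second listener on 4421 and a mirror subsystem cnode2. rpc_cmd in the harness forwards to scripts/rpc.py, so the same provisioning can be replayed by hand. A sketch with the arguments taken verbatim from the trace ($RPC is a hypothetical shorthand):

RPC="$SPDK_DIR/scripts/rpc.py"    # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on one subsystem give the multicontroller test its two network paths:
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The NOT-wrapped bdev_nvme_attach_controller calls further below are negative tests: reusing the controller name NVMe0 against the same network path (with a different hostnqn, a different subsystem, or multipath set to disable or failover) is expected to fail with -114 "already exists", while attaching a genuine second path on port 4421 succeeds.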
07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 [2024-07-15 07:58:08.124137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 Malloc1 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3323313 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3323313 /var/tmp/bdevperf.sock 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3323313 ']' 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.581 07:58:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 NVMe0n1 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.518 1 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 request: 00:22:24.518 { 00:22:24.518 "name": "NVMe0", 00:22:24.518 "trtype": "tcp", 00:22:24.518 "traddr": "10.0.0.2", 00:22:24.518 "adrfam": "ipv4", 00:22:24.518 "trsvcid": "4420", 00:22:24.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.518 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:24.518 "hostaddr": "10.0.0.2", 00:22:24.518 "hostsvcid": "60000", 00:22:24.518 "prchk_reftag": false, 00:22:24.518 "prchk_guard": false, 00:22:24.518 "hdgst": false, 00:22:24.518 "ddgst": false, 00:22:24.518 "method": "bdev_nvme_attach_controller", 00:22:24.518 "req_id": 1 00:22:24.518 } 00:22:24.518 Got JSON-RPC error response 00:22:24.518 response: 00:22:24.518 { 00:22:24.518 "code": -114, 00:22:24.518 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:24.518 } 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:24.518 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.777 request: 00:22:24.777 { 00:22:24.777 "name": "NVMe0", 00:22:24.777 "trtype": "tcp", 00:22:24.777 "traddr": "10.0.0.2", 00:22:24.777 "adrfam": "ipv4", 00:22:24.777 "trsvcid": "4420", 00:22:24.777 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.777 "hostaddr": "10.0.0.2", 00:22:24.777 "hostsvcid": "60000", 00:22:24.777 "prchk_reftag": false, 00:22:24.777 "prchk_guard": false, 00:22:24.777 
"hdgst": false, 00:22:24.777 "ddgst": false, 00:22:24.777 "method": "bdev_nvme_attach_controller", 00:22:24.777 "req_id": 1 00:22:24.777 } 00:22:24.777 Got JSON-RPC error response 00:22:24.777 response: 00:22:24.777 { 00:22:24.777 "code": -114, 00:22:24.777 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:24.777 } 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:24.777 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.778 request: 00:22:24.778 { 00:22:24.778 "name": "NVMe0", 00:22:24.778 "trtype": "tcp", 00:22:24.778 "traddr": "10.0.0.2", 00:22:24.778 "adrfam": "ipv4", 00:22:24.778 "trsvcid": "4420", 00:22:24.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.778 "hostaddr": "10.0.0.2", 00:22:24.778 "hostsvcid": "60000", 00:22:24.778 "prchk_reftag": false, 00:22:24.778 "prchk_guard": false, 00:22:24.778 "hdgst": false, 00:22:24.778 "ddgst": false, 00:22:24.778 "multipath": "disable", 00:22:24.778 "method": "bdev_nvme_attach_controller", 00:22:24.778 "req_id": 1 00:22:24.778 } 00:22:24.778 Got JSON-RPC error response 00:22:24.778 response: 00:22:24.778 { 00:22:24.778 "code": -114, 00:22:24.778 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:24.778 } 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.778 07:58:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.778 request: 00:22:24.778 { 00:22:24.778 "name": "NVMe0", 00:22:24.778 "trtype": "tcp", 00:22:24.778 "traddr": "10.0.0.2", 00:22:24.778 "adrfam": "ipv4", 00:22:24.778 "trsvcid": "4420", 00:22:24.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.778 "hostaddr": "10.0.0.2", 00:22:24.778 "hostsvcid": "60000", 00:22:24.778 "prchk_reftag": false, 00:22:24.778 "prchk_guard": false, 00:22:24.778 "hdgst": false, 00:22:24.778 "ddgst": false, 00:22:24.778 "multipath": "failover", 00:22:24.778 "method": "bdev_nvme_attach_controller", 00:22:24.778 "req_id": 1 00:22:24.778 } 00:22:24.778 Got JSON-RPC error response 00:22:24.778 response: 00:22:24.778 { 00:22:24.778 "code": -114, 00:22:24.778 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:24.778 } 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.778 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:25.037 07:58:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.430 0 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3323313 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3323313 ']' 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3323313 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3323313 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3323313' 00:22:26.430 killing process with pid 3323313 00:22:26.430 07:58:10 
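The kill and wait that follow implement the harness's killprocess helper: bdevperf, the initiator-side process, is reaped before the target's subsystems are deleted, so no teardown RPC can race a dying process. Stripped of the safety checks visible in the trace (the uname/ps guards and the refusal to kill a sudo process), the helper reduces to roughly this sketch:

killprocess() {
    local pid=$1
    kill "$pid"            # SIGTERM; the SPDK app's signal handler unwinds its reactors
    wait "$pid" || true    # reap the child so later cleanup cannot race it
}
killprocess "$bdevperf_pid"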
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3323313 00:22:26.430 07:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3323313 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:26.430 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:26.430 [2024-07-15 07:58:08.228206] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:26.430 [2024-07-15 07:58:08.228258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323313 ] 00:22:26.430 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.430 [2024-07-15 07:58:08.298010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.430 [2024-07-15 07:58:08.371892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.430 [2024-07-15 07:58:09.673001] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bf05853c-05e4-404f-8e39-145d87be0d88 already exists 00:22:26.430 [2024-07-15 07:58:09.673029] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:bf05853c-05e4-404f-8e39-145d87be0d88 alias for bdev NVMe1n1 00:22:26.430 [2024-07-15 07:58:09.673037] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:26.430 Running I/O for 1 seconds... 
00:22:26.430 00:22:26.430 Latency(us) 00:22:26.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.430 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:26.430 NVMe0n1 : 1.00 25001.17 97.66 0.00 0.00 5113.53 1531.55 11055.64 00:22:26.430 =================================================================================================================== 00:22:26.430 Total : 25001.17 97.66 0.00 0.00 5113.53 1531.55 11055.64 00:22:26.430 Received shutdown signal, test time was about 1.000000 seconds 00:22:26.430 00:22:26.430 Latency(us) 00:22:26.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.430 =================================================================================================================== 00:22:26.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.430 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.430 rmmod nvme_tcp 00:22:26.430 rmmod nvme_fabrics 00:22:26.430 rmmod nvme_keyring 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3323267 ']' 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3323267 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3323267 ']' 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3323267 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.430 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3323267 00:22:26.688 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:26.688 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:26.688 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3323267' 00:22:26.688 killing process with pid 3323267 00:22:26.688 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3323267 00:22:26.688 07:58:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3323267 00:22:26.688 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.689 07:58:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.223 07:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.224 00:22:29.224 real 0m12.194s 00:22:29.224 user 0m16.945s 00:22:29.224 sys 0m5.049s 00:22:29.224 07:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.224 07:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.224 ************************************ 00:22:29.224 END TEST nvmf_multicontroller 00:22:29.224 ************************************ 00:22:29.224 07:58:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:29.224 07:58:13 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:29.224 07:58:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:29.224 07:58:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.224 07:58:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.224 ************************************ 00:22:29.224 START TEST nvmf_aer 00:22:29.224 ************************************ 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:29.224 * Looking for test storage... 
00:22:29.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.224 07:58:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.494 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:34.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:22:34.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:34.495 Found net devices under 0000:86:00.0: cvl_0_0 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:34.495 Found net devices under 0000:86:00.1: cvl_0_1 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.495 
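nvmf_tcp_init has now chosen cvl_0_0 as the target interface and is about to pair it with cvl_0_1 as the initiator. The plumbing that follows is the heart of the phy-mode topology: the first E810 port is moved into a network namespace and addressed as the target, the second stays in the root namespace as the initiator, and a firewall rule plus two pings prove the path. Condensed from the trace (device names as enumerated above; the preliminary address flushes are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port in the root-ns firewall
ping -c 1 10.0.0.2                                               # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # and back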
07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.495 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.754 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:34.755 00:22:34.755 --- 10.0.0.2 ping statistics --- 00:22:34.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.755 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:22:34.755 00:22:34.755 --- 10.0.0.1 ping statistics --- 00:22:34.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.755 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3327291 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3327291 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3327291 ']' 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.755 07:58:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:34.755 [2024-07-15 07:58:19.465700] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:34.755 [2024-07-15 07:58:19.465751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.755 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.014 [2024-07-15 07:58:19.534753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.014 [2024-07-15 07:58:19.616686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.014 [2024-07-15 07:58:19.616720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:35.014 [2024-07-15 07:58:19.616727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.014 [2024-07-15 07:58:19.616734] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.014 [2024-07-15 07:58:19.616740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.014 [2024-07-15 07:58:19.616801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.014 [2024-07-15 07:58:19.616825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.014 [2024-07-15 07:58:19.616850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.014 [2024-07-15 07:58:19.616852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.583 [2024-07-15 07:58:20.316373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.583 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 Malloc0 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 [2024-07-15 07:58:20.368027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 [ 00:22:35.842 { 00:22:35.842 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:35.842 "subtype": "Discovery", 00:22:35.842 "listen_addresses": [], 00:22:35.842 "allow_any_host": true, 00:22:35.842 "hosts": [] 00:22:35.842 }, 00:22:35.842 { 00:22:35.842 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.842 "subtype": "NVMe", 00:22:35.842 "listen_addresses": [ 00:22:35.842 { 00:22:35.842 "trtype": "TCP", 00:22:35.842 "adrfam": "IPv4", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "trsvcid": "4420" 00:22:35.842 } 00:22:35.842 ], 00:22:35.842 "allow_any_host": true, 00:22:35.842 "hosts": [], 00:22:35.842 "serial_number": "SPDK00000000000001", 00:22:35.842 "model_number": "SPDK bdev Controller", 00:22:35.842 "max_namespaces": 2, 00:22:35.842 "min_cntlid": 1, 00:22:35.842 "max_cntlid": 65519, 00:22:35.842 "namespaces": [ 00:22:35.842 { 00:22:35.842 "nsid": 1, 00:22:35.842 "bdev_name": "Malloc0", 00:22:35.842 "name": "Malloc0", 00:22:35.842 "nguid": "1C66EFEF20924EF3B46B73D807F49183", 00:22:35.842 "uuid": "1c66efef-2092-4ef3-b46b-73d807f49183" 00:22:35.842 } 00:22:35.842 ] 00:22:35.842 } 00:22:35.842 ] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3327540 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:35.842 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:35.842 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 Malloc1 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 Asynchronous Event Request test 00:22:36.101 Attaching to 10.0.0.2 00:22:36.101 Attached to 10.0.0.2 00:22:36.101 Registering asynchronous event callbacks... 00:22:36.101 Starting namespace attribute notice tests for all controllers... 00:22:36.101 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:36.101 aer_cb - Changed Namespace 00:22:36.101 Cleaning up... 00:22:36.101 [ 00:22:36.101 { 00:22:36.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:36.101 "subtype": "Discovery", 00:22:36.101 "listen_addresses": [], 00:22:36.101 "allow_any_host": true, 00:22:36.101 "hosts": [] 00:22:36.101 }, 00:22:36.101 { 00:22:36.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.101 "subtype": "NVMe", 00:22:36.101 "listen_addresses": [ 00:22:36.101 { 00:22:36.101 "trtype": "TCP", 00:22:36.101 "adrfam": "IPv4", 00:22:36.101 "traddr": "10.0.0.2", 00:22:36.101 "trsvcid": "4420" 00:22:36.101 } 00:22:36.101 ], 00:22:36.101 "allow_any_host": true, 00:22:36.101 "hosts": [], 00:22:36.101 "serial_number": "SPDK00000000000001", 00:22:36.101 "model_number": "SPDK bdev Controller", 00:22:36.101 "max_namespaces": 2, 00:22:36.101 "min_cntlid": 1, 00:22:36.101 "max_cntlid": 65519, 00:22:36.101 "namespaces": [ 00:22:36.101 { 00:22:36.101 "nsid": 1, 00:22:36.101 "bdev_name": "Malloc0", 00:22:36.101 "name": "Malloc0", 00:22:36.101 "nguid": "1C66EFEF20924EF3B46B73D807F49183", 00:22:36.101 "uuid": "1c66efef-2092-4ef3-b46b-73d807f49183" 00:22:36.101 }, 00:22:36.101 { 00:22:36.101 "nsid": 2, 00:22:36.101 "bdev_name": "Malloc1", 00:22:36.101 "name": "Malloc1", 00:22:36.101 "nguid": "05AEAB04B3F240FBA89F02C6C9905050", 00:22:36.101 "uuid": "05aeab04-b3f2-40fb-a89f-02c6c9905050" 00:22:36.101 } 00:22:36.101 ] 00:22:36.101 } 00:22:36.101 ] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3327540 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.101 rmmod nvme_tcp 00:22:36.101 rmmod nvme_fabrics 00:22:36.101 rmmod nvme_keyring 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3327291 ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3327291 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3327291 ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3327291 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3327291 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3327291' 00:22:36.101 killing process with pid 3327291 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3327291 00:22:36.101 07:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3327291 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:36.361 07:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.897 07:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.897 00:22:38.897 real 0m9.535s 00:22:38.897 user 0m7.325s 00:22:38.897 sys 0m4.692s 00:22:38.897 07:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.897 07:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.897 ************************************ 00:22:38.897 END TEST nvmf_aer 00:22:38.897 ************************************ 00:22:38.897 07:58:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:38.897 07:58:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:38.897 07:58:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.897 07:58:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.897 07:58:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.897 ************************************ 00:22:38.897 START TEST nvmf_async_init 00:22:38.897 ************************************ 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:38.897 * Looking for test storage... 00:22:38.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dd80c75127614bee981f585a805795dd 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.897 07:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:44.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:44.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:44.234 Found net devices under 0000:86:00.0: cvl_0_0 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:44.234 Found net devices under 0000:86:00.1: cvl_0_1 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.234 
07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.234 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:22:44.235 00:22:44.235 --- 10.0.0.2 ping statistics --- 00:22:44.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.235 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:22:44.235 00:22:44.235 --- 10.0.0.1 ping statistics --- 00:22:44.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.235 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.235 07:58:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3331056 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3331056 00:22:44.493 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3331056 ']' 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.494 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:44.494 [2024-07-15 07:58:29.058206] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:22:44.494 [2024-07-15 07:58:29.058259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.494 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.494 [2024-07-15 07:58:29.130264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.494 [2024-07-15 07:58:29.209446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.494 [2024-07-15 07:58:29.209479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.494 [2024-07-15 07:58:29.209486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.494 [2024-07-15 07:58:29.209492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.494 [2024-07-15 07:58:29.209497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
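For readers reproducing this fixture by hand, the test-network bring-up logged above condenses to a short ip/iptables sequence. This is a minimal sketch, not harness code: it assumes the same cvl_0_0/cvl_0_1 port names this host enumerated for the e810 NIC (other machines will report different names), and it keeps the target-side port in a private namespace while the initiator-side port stays in the root namespace.

  # Flush any stale addresses, then split the two NIC ports across namespaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Both pings completing with sub-millisecond RTTs, as in the statistics above, indicates the namespace plumbing is sound before the target app starts.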
00:22:44.494 [2024-07-15 07:58:29.209515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 [2024-07-15 07:58:29.908850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 null0 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dd80c75127614bee981f585a805795dd 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.429 [2024-07-15 07:58:29.949043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.429 07:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 nvme0n1 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 [ 00:22:45.691 { 00:22:45.691 "name": "nvme0n1", 00:22:45.691 "aliases": [ 00:22:45.691 "dd80c751-2761-4bee-981f-585a805795dd" 00:22:45.691 ], 00:22:45.691 "product_name": "NVMe disk", 00:22:45.691 "block_size": 512, 00:22:45.691 "num_blocks": 2097152, 00:22:45.691 "uuid": "dd80c751-2761-4bee-981f-585a805795dd", 00:22:45.691 "assigned_rate_limits": { 00:22:45.691 "rw_ios_per_sec": 0, 00:22:45.691 "rw_mbytes_per_sec": 0, 00:22:45.691 "r_mbytes_per_sec": 0, 00:22:45.691 "w_mbytes_per_sec": 0 00:22:45.691 }, 00:22:45.691 "claimed": false, 00:22:45.691 "zoned": false, 00:22:45.691 "supported_io_types": { 00:22:45.691 "read": true, 00:22:45.691 "write": true, 00:22:45.691 "unmap": false, 00:22:45.691 "flush": true, 00:22:45.691 "reset": true, 00:22:45.691 "nvme_admin": true, 00:22:45.691 "nvme_io": true, 00:22:45.691 "nvme_io_md": false, 00:22:45.691 "write_zeroes": true, 00:22:45.691 "zcopy": false, 00:22:45.691 "get_zone_info": false, 00:22:45.691 "zone_management": false, 00:22:45.691 "zone_append": false, 00:22:45.691 "compare": true, 00:22:45.691 "compare_and_write": true, 00:22:45.691 "abort": true, 00:22:45.691 "seek_hole": false, 00:22:45.691 "seek_data": false, 00:22:45.691 "copy": true, 00:22:45.691 "nvme_iov_md": false 00:22:45.691 }, 00:22:45.691 "memory_domains": [ 00:22:45.691 { 00:22:45.691 "dma_device_id": "system", 00:22:45.691 "dma_device_type": 1 00:22:45.691 } 00:22:45.691 ], 00:22:45.691 "driver_specific": { 00:22:45.691 "nvme": [ 00:22:45.691 { 00:22:45.691 "trid": { 00:22:45.691 "trtype": "TCP", 00:22:45.691 "adrfam": "IPv4", 00:22:45.691 "traddr": "10.0.0.2", 00:22:45.691 "trsvcid": "4420", 00:22:45.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.691 }, 00:22:45.691 "ctrlr_data": { 00:22:45.691 "cntlid": 1, 00:22:45.691 "vendor_id": "0x8086", 00:22:45.691 "model_number": "SPDK bdev Controller", 00:22:45.691 "serial_number": "00000000000000000000", 00:22:45.691 "firmware_revision": "24.09", 00:22:45.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.691 "oacs": { 00:22:45.691 "security": 0, 00:22:45.691 "format": 0, 00:22:45.691 "firmware": 0, 00:22:45.691 "ns_manage": 0 00:22:45.691 }, 00:22:45.691 "multi_ctrlr": true, 00:22:45.691 "ana_reporting": false 00:22:45.691 }, 00:22:45.691 "vs": { 00:22:45.691 "nvme_version": "1.3" 00:22:45.691 }, 00:22:45.691 "ns_data": { 00:22:45.691 "id": 1, 00:22:45.691 "can_share": true 00:22:45.691 } 00:22:45.691 } 00:22:45.691 ], 00:22:45.691 "mp_policy": "active_passive" 00:22:45.691 } 00:22:45.691 } 00:22:45.691 ] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
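The subsystem wiring that host/async_init.sh just drove through rpc_cmd maps onto plain rpc.py invocations. A sketch under two assumptions: rpc.py lives at scripts/rpc.py in the SPDK tree, and the target's RPC socket is the default /var/tmp/spdk.sock (the UNIX socket stays reachable even though nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace).

  RPC="scripts/rpc.py"    # assumed location of the SPDK RPC client
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512               # 1024 MiB null bdev, 512 B blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g dd80c75127614bee981f585a805795dd            # NGUID chosen by the script
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: attach and confirm the namespace surfaces as nvme0n1.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dump above shows the expected result: num_blocks 2097152 at block_size 512 (the 1 GiB null bdev) and the NGUID round-tripped into the reported uuid. The reset that follows drops and re-establishes the controller ("cntlid": 2 in the next dump) without changing the bdev identity.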
00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 [2024-07-15 07:58:30.209609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:45.691 [2024-07-15 07:58:30.209665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5250 (9): Bad file descriptor 00:22:45.691 [2024-07-15 07:58:30.341305] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 [ 00:22:45.691 { 00:22:45.691 "name": "nvme0n1", 00:22:45.691 "aliases": [ 00:22:45.691 "dd80c751-2761-4bee-981f-585a805795dd" 00:22:45.691 ], 00:22:45.691 "product_name": "NVMe disk", 00:22:45.691 "block_size": 512, 00:22:45.691 "num_blocks": 2097152, 00:22:45.691 "uuid": "dd80c751-2761-4bee-981f-585a805795dd", 00:22:45.691 "assigned_rate_limits": { 00:22:45.691 "rw_ios_per_sec": 0, 00:22:45.691 "rw_mbytes_per_sec": 0, 00:22:45.691 "r_mbytes_per_sec": 0, 00:22:45.691 "w_mbytes_per_sec": 0 00:22:45.691 }, 00:22:45.691 "claimed": false, 00:22:45.691 "zoned": false, 00:22:45.691 "supported_io_types": { 00:22:45.691 "read": true, 00:22:45.691 "write": true, 00:22:45.691 "unmap": false, 00:22:45.691 "flush": true, 00:22:45.691 "reset": true, 00:22:45.691 "nvme_admin": true, 00:22:45.691 "nvme_io": true, 00:22:45.691 "nvme_io_md": false, 00:22:45.691 "write_zeroes": true, 00:22:45.691 "zcopy": false, 00:22:45.691 "get_zone_info": false, 00:22:45.691 "zone_management": false, 00:22:45.691 "zone_append": false, 00:22:45.691 "compare": true, 00:22:45.691 "compare_and_write": true, 00:22:45.691 "abort": true, 00:22:45.691 "seek_hole": false, 00:22:45.691 "seek_data": false, 00:22:45.691 "copy": true, 00:22:45.691 "nvme_iov_md": false 00:22:45.691 }, 00:22:45.691 "memory_domains": [ 00:22:45.691 { 00:22:45.691 "dma_device_id": "system", 00:22:45.691 "dma_device_type": 1 00:22:45.691 } 00:22:45.691 ], 00:22:45.691 "driver_specific": { 00:22:45.691 "nvme": [ 00:22:45.691 { 00:22:45.691 "trid": { 00:22:45.691 "trtype": "TCP", 00:22:45.691 "adrfam": "IPv4", 00:22:45.691 "traddr": "10.0.0.2", 00:22:45.691 "trsvcid": "4420", 00:22:45.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.691 }, 00:22:45.691 "ctrlr_data": { 00:22:45.691 "cntlid": 2, 00:22:45.691 "vendor_id": "0x8086", 00:22:45.691 "model_number": "SPDK bdev Controller", 00:22:45.691 "serial_number": "00000000000000000000", 00:22:45.691 "firmware_revision": "24.09", 00:22:45.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.691 "oacs": { 00:22:45.691 "security": 0, 00:22:45.691 "format": 0, 00:22:45.691 "firmware": 0, 00:22:45.691 "ns_manage": 0 00:22:45.691 }, 00:22:45.691 "multi_ctrlr": true, 00:22:45.691 "ana_reporting": false 00:22:45.691 }, 00:22:45.691 "vs": { 00:22:45.691 "nvme_version": "1.3" 00:22:45.691 }, 00:22:45.691 "ns_data": { 00:22:45.691 "id": 1, 00:22:45.691 "can_share": true 00:22:45.691 } 00:22:45.691 } 00:22:45.691 ], 00:22:45.691 "mp_policy": "active_passive" 00:22:45.691 } 00:22:45.691 } 
00:22:45.691 ] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.t1NONCjvfe 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.t1NONCjvfe 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 [2024-07-15 07:58:30.402192] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.691 [2024-07-15 07:58:30.402311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.691 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t1NONCjvfe 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.692 [2024-07-15 07:58:30.410209] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t1NONCjvfe 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.692 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.692 [2024-07-15 07:58:30.422261] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.692 [2024-07-15 07:58:30.422298] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
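The secure-channel leg just logged condenses to the sequence below. Again a sketch with the same assumed RPC client path; the PSK literal is the NVMe/TCP interchange-format key the script echoed, and KEY_PATH stands in for the mktemp result (/tmp/tmp.t1NONCjvfe in this run).

  RPC="scripts/rpc.py"    # as in the previous sketch
  KEY_PATH=$(mktemp)      # this run produced /tmp/tmp.t1NONCjvfe
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
      --secure-channel
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
      --psk "$KEY_PATH"
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

The WARNING lines above are expected: TLS listening support is flagged experimental and the PSK-by-path interface is deprecated, scheduled for removal in v24.09. The bdev dump that follows ("cntlid": 3, "trsvcid": "4421") confirms the TLS attach succeeded despite them.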
00:22:45.951 nvme0n1 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.951 [ 00:22:45.951 { 00:22:45.951 "name": "nvme0n1", 00:22:45.951 "aliases": [ 00:22:45.951 "dd80c751-2761-4bee-981f-585a805795dd" 00:22:45.951 ], 00:22:45.951 "product_name": "NVMe disk", 00:22:45.951 "block_size": 512, 00:22:45.951 "num_blocks": 2097152, 00:22:45.951 "uuid": "dd80c751-2761-4bee-981f-585a805795dd", 00:22:45.951 "assigned_rate_limits": { 00:22:45.951 "rw_ios_per_sec": 0, 00:22:45.951 "rw_mbytes_per_sec": 0, 00:22:45.951 "r_mbytes_per_sec": 0, 00:22:45.951 "w_mbytes_per_sec": 0 00:22:45.951 }, 00:22:45.951 "claimed": false, 00:22:45.951 "zoned": false, 00:22:45.951 "supported_io_types": { 00:22:45.951 "read": true, 00:22:45.951 "write": true, 00:22:45.951 "unmap": false, 00:22:45.951 "flush": true, 00:22:45.951 "reset": true, 00:22:45.951 "nvme_admin": true, 00:22:45.951 "nvme_io": true, 00:22:45.951 "nvme_io_md": false, 00:22:45.951 "write_zeroes": true, 00:22:45.951 "zcopy": false, 00:22:45.951 "get_zone_info": false, 00:22:45.951 "zone_management": false, 00:22:45.951 "zone_append": false, 00:22:45.951 "compare": true, 00:22:45.951 "compare_and_write": true, 00:22:45.951 "abort": true, 00:22:45.951 "seek_hole": false, 00:22:45.951 "seek_data": false, 00:22:45.951 "copy": true, 00:22:45.951 "nvme_iov_md": false 00:22:45.951 }, 00:22:45.951 "memory_domains": [ 00:22:45.951 { 00:22:45.951 "dma_device_id": "system", 00:22:45.951 "dma_device_type": 1 00:22:45.951 } 00:22:45.951 ], 00:22:45.951 "driver_specific": { 00:22:45.951 "nvme": [ 00:22:45.951 { 00:22:45.951 "trid": { 00:22:45.951 "trtype": "TCP", 00:22:45.951 "adrfam": "IPv4", 00:22:45.951 "traddr": "10.0.0.2", 00:22:45.951 "trsvcid": "4421", 00:22:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:45.951 }, 00:22:45.951 "ctrlr_data": { 00:22:45.951 "cntlid": 3, 00:22:45.951 "vendor_id": "0x8086", 00:22:45.951 "model_number": "SPDK bdev Controller", 00:22:45.951 "serial_number": "00000000000000000000", 00:22:45.951 "firmware_revision": "24.09", 00:22:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.951 "oacs": { 00:22:45.951 "security": 0, 00:22:45.951 "format": 0, 00:22:45.951 "firmware": 0, 00:22:45.951 "ns_manage": 0 00:22:45.951 }, 00:22:45.951 "multi_ctrlr": true, 00:22:45.951 "ana_reporting": false 00:22:45.951 }, 00:22:45.951 "vs": { 00:22:45.951 "nvme_version": "1.3" 00:22:45.951 }, 00:22:45.951 "ns_data": { 00:22:45.951 "id": 1, 00:22:45.951 "can_share": true 00:22:45.951 } 00:22:45.951 } 00:22:45.951 ], 00:22:45.951 "mp_policy": "active_passive" 00:22:45.951 } 00:22:45.951 } 00:22:45.951 ] 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.t1NONCjvfe 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.951 rmmod nvme_tcp 00:22:45.951 rmmod nvme_fabrics 00:22:45.951 rmmod nvme_keyring 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3331056 ']' 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3331056 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3331056 ']' 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3331056 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3331056 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3331056' 00:22:45.951 killing process with pid 3331056 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3331056 00:22:45.951 [2024-07-15 07:58:30.634324] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:45.951 [2024-07-15 07:58:30.634347] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:45.951 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3331056 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.211 07:58:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:48.115 07:58:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.400 00:22:48.400 real 0m9.696s 00:22:48.400 user 0m3.639s 00:22:48.400 sys 0m4.601s 00:22:48.400 07:58:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.400 07:58:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.400 ************************************ 00:22:48.400 END TEST nvmf_async_init 00:22:48.400 ************************************ 00:22:48.400 07:58:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:48.400 07:58:32 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:48.400 07:58:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:48.400 07:58:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.400 07:58:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.400 ************************************ 00:22:48.400 START TEST dma 00:22:48.400 ************************************ 00:22:48.400 07:58:32 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:48.400 * Looking for test storage... 00:22:48.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.400 07:58:33 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.400 07:58:33 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.400 07:58:33 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.400 07:58:33 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.400 07:58:33 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.400 07:58:33 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.400 07:58:33 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.400 07:58:33 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:48.400 07:58:33 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.400 07:58:33 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.400 07:58:33 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:48.400 07:58:33 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:48.400 00:22:48.400 real 0m0.119s 00:22:48.400 user 0m0.056s 00:22:48.400 sys 0m0.071s 00:22:48.400 07:58:33 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.400 07:58:33 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:48.400 ************************************ 00:22:48.400 END TEST dma 00:22:48.400 ************************************ 00:22:48.400 07:58:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:48.400 07:58:33 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:48.400 07:58:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:48.400 07:58:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.400 07:58:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.400 ************************************ 00:22:48.400 START TEST nvmf_identify 00:22:48.400 ************************************ 00:22:48.400 07:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:48.658 * Looking for test storage... 00:22:48.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.658 07:58:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.223 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:55.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:55.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:55.224 Found net devices under 0000:86:00.0: cvl_0_0 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
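Note on the device probing traced above: gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor/device ID, and 0x8086:0x159b (matched here for 0000:86:00.0 and 0000:86:00.1) is an Intel E810 port driven by the ice driver, which is what SPDK_TEST_NVMF_NICS=e810 expects. A minimal standalone sketch of the same lookup, assuming only stock lspci and sysfs (this is not part of the test run):

  # List E810 ports the way nvmf/common.sh classifies them:
  # vendor 0x8086 (Intel) with device 0x159b (E810).
  lspci -d 8086:159b
  for pci in /sys/bus/pci/devices/*; do
      grep -q 0x8086 "$pci/vendor" || continue
      grep -q 0x159b "$pci/device" || continue
      # Kernel net device name(s) for this PCI function, e.g. cvl_0_0
      echo "$pci -> $(ls "$pci/net" 2>/dev/null)"
  done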
00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:55.224 Found net devices under 0000:86:00.1: cvl_0_1 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:55.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:22:55.224 00:22:55.224 --- 10.0.0.2 ping statistics --- 00:22:55.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.224 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:22:55.224 00:22:55.224 --- 10.0.0.1 ping statistics --- 00:22:55.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.224 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.224 07:58:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3334865 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3334865 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3334865 ']' 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.224 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.224 [2024-07-15 07:58:39.073245] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
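Note on the nvmf_tcp_init sequence traced above: it splits the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a private namespace cvl_0_0_ns_spdk (target side, cvl_0_0, 10.0.0.2), then verifies reachability with the two pings whose statistics appear above. A minimal sketch of the equivalent manual plumbing, using this run's interface names (cvl_0_0 and cvl_0_1 are names assigned by this job, not fixed conventions):

  # Target port lives in its own namespace so both ends can run on one host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side stays in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept NVMe/TCP traffic on the default port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions, as the log does.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1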
00:22:55.224 [2024-07-15 07:58:39.073296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.224 [2024-07-15 07:58:39.145774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.225 [2024-07-15 07:58:39.227920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.225 [2024-07-15 07:58:39.227957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.225 [2024-07-15 07:58:39.227964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.225 [2024-07-15 07:58:39.227970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.225 [2024-07-15 07:58:39.227975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.225 [2024-07-15 07:58:39.228022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.225 [2024-07-15 07:58:39.228052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.225 [2024-07-15 07:58:39.228158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.225 [2024-07-15 07:58:39.228160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.225 [2024-07-15 07:58:39.901994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.225 Malloc0 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:55.225 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.487 [2024-07-15 07:58:39.985736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.487 07:58:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.487 [ 00:22:55.487 { 00:22:55.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:55.487 "subtype": "Discovery", 00:22:55.487 "listen_addresses": [ 00:22:55.487 { 00:22:55.487 "trtype": "TCP", 00:22:55.487 "adrfam": "IPv4", 00:22:55.487 "traddr": "10.0.0.2", 00:22:55.487 "trsvcid": "4420" 00:22:55.487 } 00:22:55.487 ], 00:22:55.487 "allow_any_host": true, 00:22:55.487 "hosts": [] 00:22:55.487 }, 00:22:55.487 { 00:22:55.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.487 "subtype": "NVMe", 00:22:55.487 "listen_addresses": [ 00:22:55.487 { 00:22:55.487 "trtype": "TCP", 00:22:55.487 "adrfam": "IPv4", 00:22:55.487 "traddr": "10.0.0.2", 00:22:55.487 "trsvcid": "4420" 00:22:55.487 } 00:22:55.487 ], 00:22:55.487 "allow_any_host": true, 00:22:55.487 "hosts": [], 00:22:55.487 "serial_number": "SPDK00000000000001", 00:22:55.487 "model_number": "SPDK bdev Controller", 00:22:55.487 "max_namespaces": 32, 00:22:55.487 "min_cntlid": 1, 00:22:55.487 "max_cntlid": 65519, 00:22:55.487 "namespaces": [ 00:22:55.487 { 00:22:55.487 "nsid": 1, 00:22:55.487 "bdev_name": "Malloc0", 00:22:55.487 "name": "Malloc0", 00:22:55.487 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:55.487 "eui64": "ABCDEF0123456789", 00:22:55.487 "uuid": "2dfc2daa-5cee-408a-88db-87fb8cb1ace1" 00:22:55.487 } 00:22:55.487 ] 00:22:55.487 } 00:22:55.487 ] 00:22:55.487 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.487 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:55.487 [2024-07-15 07:58:40.037596] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
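Note on the target configuration: the nvmf_get_subsystems dump above reflects the rpc_cmd sequence traced just before it. rpc_cmd in autotest_common.sh forwards to scripts/rpc.py, so under that assumption the same target can be assembled by hand roughly as follows (default RPC socket and SPDK checkout working directory assumed; all names, sizes, and addresses are taken from this run's log):

  # TCP transport with 8 KiB I/O unit size (-o disables the C2H success
  # optimization, matching NVMF_TRANSPORT_OPTS in this run).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to serve as the namespace.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem allowing any host (-a) with the serial shown in the dump.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # Data listener plus the discovery listener queried later by identify.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420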
00:22:55.487 [2024-07-15 07:58:40.037636] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335023 ] 00:22:55.487 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.487 [2024-07-15 07:58:40.066791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:55.487 [2024-07-15 07:58:40.066841] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:55.487 [2024-07-15 07:58:40.066846] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:55.487 [2024-07-15 07:58:40.066858] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:55.487 [2024-07-15 07:58:40.066863] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:55.487 [2024-07-15 07:58:40.067080] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:55.487 [2024-07-15 07:58:40.067110] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2268ec0 0 00:22:55.487 [2024-07-15 07:58:40.081239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:55.487 [2024-07-15 07:58:40.081255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:55.487 [2024-07-15 07:58:40.081260] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:55.487 [2024-07-15 07:58:40.081263] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:55.487 [2024-07-15 07:58:40.081301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.081308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.081311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.487 [2024-07-15 07:58:40.081325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:55.487 [2024-07-15 07:58:40.081341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.487 [2024-07-15 07:58:40.088235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.487 [2024-07-15 07:58:40.088243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.487 [2024-07-15 07:58:40.088246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.487 [2024-07-15 07:58:40.088261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:55.487 [2024-07-15 07:58:40.088271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:55.487 [2024-07-15 07:58:40.088275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:55.487 [2024-07-15 07:58:40.088288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088292] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.487 [2024-07-15 07:58:40.088302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.487 [2024-07-15 07:58:40.088314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.487 [2024-07-15 07:58:40.088506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.487 [2024-07-15 07:58:40.088512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.487 [2024-07-15 07:58:40.088515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.487 [2024-07-15 07:58:40.088523] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:55.487 [2024-07-15 07:58:40.088529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:55.487 [2024-07-15 07:58:40.088535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.487 [2024-07-15 07:58:40.088547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.487 [2024-07-15 07:58:40.088557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.487 [2024-07-15 07:58:40.088625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.487 [2024-07-15 07:58:40.088630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.487 [2024-07-15 07:58:40.088633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.487 [2024-07-15 07:58:40.088641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:55.487 [2024-07-15 07:58:40.088648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:55.487 [2024-07-15 07:58:40.088654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.487 [2024-07-15 07:58:40.088666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.487 [2024-07-15 07:58:40.088675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.487 [2024-07-15 07:58:40.088742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.487 
[2024-07-15 07:58:40.088748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.487 [2024-07-15 07:58:40.088751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.487 [2024-07-15 07:58:40.088759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:55.487 [2024-07-15 07:58:40.088769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.487 [2024-07-15 07:58:40.088781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.487 [2024-07-15 07:58:40.088790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.487 [2024-07-15 07:58:40.088858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.487 [2024-07-15 07:58:40.088864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.487 [2024-07-15 07:58:40.088867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.487 [2024-07-15 07:58:40.088870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.487 [2024-07-15 07:58:40.088874] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:55.487 [2024-07-15 07:58:40.088879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:55.487 [2024-07-15 07:58:40.088885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:55.487 [2024-07-15 07:58:40.088990] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:55.488 [2024-07-15 07:58:40.088994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:55.488 [2024-07-15 07:58:40.089003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.089014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.488 [2024-07-15 07:58:40.089023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.488 [2024-07-15 07:58:40.089089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.089094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.089097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.089105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:55.488 [2024-07-15 07:58:40.089112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.089125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.488 [2024-07-15 07:58:40.089134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.488 [2024-07-15 07:58:40.089206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.089211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.089214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.089223] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:55.488 [2024-07-15 07:58:40.089235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.089241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:55.488 [2024-07-15 07:58:40.089249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.089256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.089266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.488 [2024-07-15 07:58:40.089275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.488 [2024-07-15 07:58:40.089376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.488 [2024-07-15 07:58:40.089382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.488 [2024-07-15 07:58:40.089385] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089389] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2268ec0): datao=0, datal=4096, cccid=0 00:22:55.488 [2024-07-15 07:58:40.089393] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ebe40) on tqpair(0x2268ec0): expected_datao=0, payload_size=4096 00:22:55.488 [2024-07-15 07:58:40.089396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089413] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.089418] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.130426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.130429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.130441] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:55.488 [2024-07-15 07:58:40.130449] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:55.488 [2024-07-15 07:58:40.130454] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:55.488 [2024-07-15 07:58:40.130458] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:55.488 [2024-07-15 07:58:40.130462] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:55.488 [2024-07-15 07:58:40.130466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.130474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.130481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.488 [2024-07-15 07:58:40.130508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.488 [2024-07-15 07:58:40.130614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.130620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.130623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.130633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.488 [2024-07-15 07:58:40.130650] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.488 [2024-07-15 07:58:40.130666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.488 [2024-07-15 07:58:40.130682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.488 [2024-07-15 07:58:40.130697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.130707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:55.488 [2024-07-15 07:58:40.130712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.488 [2024-07-15 07:58:40.130731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebe40, cid 0, qid 0 00:22:55.488 [2024-07-15 07:58:40.130736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ebfc0, cid 1, qid 0 00:22:55.488 [2024-07-15 07:58:40.130740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec140, cid 2, qid 0 00:22:55.488 [2024-07-15 07:58:40.130744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.488 [2024-07-15 07:58:40.130748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec440, cid 4, qid 0 00:22:55.488 [2024-07-15 07:58:40.130845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.130850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.130853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec440) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.130863] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:55.488 [2024-07-15 07:58:40.130868] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:55.488 [2024-07-15 07:58:40.130878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.130882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2268ec0) 00:22:55.488 [2024-07-15 07:58:40.130887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.488 [2024-07-15 07:58:40.130896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec440, cid 4, qid 0 00:22:55.488 [2024-07-15 07:58:40.131022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.488 [2024-07-15 07:58:40.131028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.488 [2024-07-15 07:58:40.131031] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131034] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2268ec0): datao=0, datal=4096, cccid=4 00:22:55.488 [2024-07-15 07:58:40.131038] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ec440) on tqpair(0x2268ec0): expected_datao=0, payload_size=4096 00:22:55.488 [2024-07-15 07:58:40.131042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131048] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131051] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.488 [2024-07-15 07:58:40.131068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.488 [2024-07-15 07:58:40.131071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec440) on tqpair=0x2268ec0 00:22:55.488 [2024-07-15 07:58:40.131086] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:55.488 [2024-07-15 07:58:40.131108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.488 [2024-07-15 07:58:40.131112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2268ec0) 00:22:55.489 [2024-07-15 07:58:40.131117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.489 [2024-07-15 07:58:40.131123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2268ec0) 00:22:55.489 [2024-07-15 07:58:40.131134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.489 [2024-07-15 07:58:40.131147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x22ec440, cid 4, qid 0 00:22:55.489 [2024-07-15 07:58:40.131151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec5c0, cid 5, qid 0 00:22:55.489 [2024-07-15 07:58:40.131300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.489 [2024-07-15 07:58:40.131306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.489 [2024-07-15 07:58:40.131309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131312] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2268ec0): datao=0, datal=1024, cccid=4 00:22:55.489 [2024-07-15 07:58:40.131316] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ec440) on tqpair(0x2268ec0): expected_datao=0, payload_size=1024 00:22:55.489 [2024-07-15 07:58:40.131321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131327] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.489 [2024-07-15 07:58:40.131339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.489 [2024-07-15 07:58:40.131342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.131346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec5c0) on tqpair=0x2268ec0 00:22:55.489 [2024-07-15 07:58:40.176233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.489 [2024-07-15 07:58:40.176245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.489 [2024-07-15 07:58:40.176248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.176252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec440) on tqpair=0x2268ec0 00:22:55.489 [2024-07-15 07:58:40.176269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.176279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2268ec0) 00:22:55.489 [2024-07-15 07:58:40.176286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.489 [2024-07-15 07:58:40.176303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec440, cid 4, qid 0 00:22:55.489 [2024-07-15 07:58:40.176466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.489 [2024-07-15 07:58:40.176472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.489 [2024-07-15 07:58:40.176476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.176479] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2268ec0): datao=0, datal=3072, cccid=4 00:22:55.489 [2024-07-15 07:58:40.176484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ec440) on tqpair(0x2268ec0): expected_datao=0, payload_size=3072 00:22:55.489 [2024-07-15 07:58:40.176488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.176494] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.176497] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.489 [2024-07-15 07:58:40.217415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.489 [2024-07-15 07:58:40.217419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec440) on tqpair=0x2268ec0 00:22:55.489 [2024-07-15 07:58:40.217430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2268ec0) 00:22:55.489 [2024-07-15 07:58:40.217440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.489 [2024-07-15 07:58:40.217453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec440, cid 4, qid 0 00:22:55.489 [2024-07-15 07:58:40.217557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.489 [2024-07-15 07:58:40.217563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.489 [2024-07-15 07:58:40.217566] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217569] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2268ec0): datao=0, datal=8, cccid=4 00:22:55.489 [2024-07-15 07:58:40.217573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ec440) on tqpair(0x2268ec0): expected_datao=0, payload_size=8 00:22:55.489 [2024-07-15 07:58:40.217576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217585] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.489 [2024-07-15 07:58:40.217588] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.756 [2024-07-15 07:58:40.260236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.756 [2024-07-15 07:58:40.260251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.756 [2024-07-15 07:58:40.260255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.756 [2024-07-15 07:58:40.260259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec440) on tqpair=0x2268ec0 00:22:55.756 ===================================================== 00:22:55.756 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:55.757 ===================================================== 00:22:55.757 Controller Capabilities/Features 00:22:55.757 ================================ 00:22:55.757 Vendor ID: 0000 00:22:55.757 Subsystem Vendor ID: 0000 00:22:55.757 Serial Number: .................... 00:22:55.757 Model Number: ........................................ 
00:22:55.757 Firmware Version: 24.09 00:22:55.757 Recommended Arb Burst: 0 00:22:55.757 IEEE OUI Identifier: 00 00 00 00:22:55.757 Multi-path I/O 00:22:55.757 May have multiple subsystem ports: No 00:22:55.757 May have multiple controllers: No 00:22:55.757 Associated with SR-IOV VF: No 00:22:55.757 Max Data Transfer Size: 131072 00:22:55.757 Max Number of Namespaces: 0 00:22:55.757 Max Number of I/O Queues: 1024 00:22:55.757 NVMe Specification Version (VS): 1.3 00:22:55.757 NVMe Specification Version (Identify): 1.3 00:22:55.757 Maximum Queue Entries: 128 00:22:55.757 Contiguous Queues Required: Yes 00:22:55.757 Arbitration Mechanisms Supported 00:22:55.757 Weighted Round Robin: Not Supported 00:22:55.757 Vendor Specific: Not Supported 00:22:55.757 Reset Timeout: 15000 ms 00:22:55.757 Doorbell Stride: 4 bytes 00:22:55.757 NVM Subsystem Reset: Not Supported 00:22:55.757 Command Sets Supported 00:22:55.757 NVM Command Set: Supported 00:22:55.757 Boot Partition: Not Supported 00:22:55.757 Memory Page Size Minimum: 4096 bytes 00:22:55.757 Memory Page Size Maximum: 4096 bytes 00:22:55.757 Persistent Memory Region: Not Supported 00:22:55.757 Optional Asynchronous Events Supported 00:22:55.757 Namespace Attribute Notices: Not Supported 00:22:55.757 Firmware Activation Notices: Not Supported 00:22:55.757 ANA Change Notices: Not Supported 00:22:55.757 PLE Aggregate Log Change Notices: Not Supported 00:22:55.757 LBA Status Info Alert Notices: Not Supported 00:22:55.757 EGE Aggregate Log Change Notices: Not Supported 00:22:55.757 Normal NVM Subsystem Shutdown event: Not Supported 00:22:55.757 Zone Descriptor Change Notices: Not Supported 00:22:55.757 Discovery Log Change Notices: Supported 00:22:55.757 Controller Attributes 00:22:55.757 128-bit Host Identifier: Not Supported 00:22:55.757 Non-Operational Permissive Mode: Not Supported 00:22:55.757 NVM Sets: Not Supported 00:22:55.757 Read Recovery Levels: Not Supported 00:22:55.757 Endurance Groups: Not Supported 00:22:55.757 Predictable Latency Mode: Not Supported 00:22:55.757 Traffic Based Keep Alive: Not Supported 00:22:55.757 Namespace Granularity: Not Supported 00:22:55.757 SQ Associations: Not Supported 00:22:55.757 UUID List: Not Supported 00:22:55.757 Multi-Domain Subsystem: Not Supported 00:22:55.757 Fixed Capacity Management: Not Supported 00:22:55.757 Variable Capacity Management: Not Supported 00:22:55.757 Delete Endurance Group: Not Supported 00:22:55.757 Delete NVM Set: Not Supported 00:22:55.757 Extended LBA Formats Supported: Not Supported 00:22:55.757 Flexible Data Placement Supported: Not Supported 00:22:55.757 00:22:55.757 Controller Memory Buffer Support 00:22:55.757 ================================ 00:22:55.757 Supported: No 00:22:55.757 00:22:55.757 Persistent Memory Region Support 00:22:55.757 ================================ 00:22:55.757 Supported: No 00:22:55.757 00:22:55.757 Admin Command Set Attributes 00:22:55.757 ============================ 00:22:55.757 Security Send/Receive: Not Supported 00:22:55.757 Format NVM: Not Supported 00:22:55.757 Firmware Activate/Download: Not Supported 00:22:55.757 Namespace Management: Not Supported 00:22:55.757 Device Self-Test: Not Supported 00:22:55.757 Directives: Not Supported 00:22:55.757 NVMe-MI: Not Supported 00:22:55.757 Virtualization Management: Not Supported 00:22:55.757 Doorbell Buffer Config: Not Supported 00:22:55.757 Get LBA Status Capability: Not Supported 00:22:55.757 Command & Feature Lockdown Capability: Not Supported 00:22:55.757 Abort Command Limit: 1 00:22:55.757 Async 
Event Request Limit: 4 00:22:55.757 Number of Firmware Slots: N/A 00:22:55.757 Firmware Slot 1 Read-Only: N/A 00:22:55.757 Firmware Activation Without Reset: N/A 00:22:55.757 Multiple Update Detection Support: N/A 00:22:55.757 Firmware Update Granularity: No Information Provided 00:22:55.757 Per-Namespace SMART Log: No 00:22:55.757 Asymmetric Namespace Access Log Page: Not Supported 00:22:55.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:55.757 Command Effects Log Page: Not Supported 00:22:55.757 Get Log Page Extended Data: Supported 00:22:55.757 Telemetry Log Pages: Not Supported 00:22:55.757 Persistent Event Log Pages: Not Supported 00:22:55.757 Supported Log Pages Log Page: May Support 00:22:55.757 Commands Supported & Effects Log Page: Not Supported 00:22:55.757 Feature Identifiers & Effects Log Page: May Support 00:22:55.757 NVMe-MI Commands & Effects Log Page: May Support 00:22:55.757 Data Area 4 for Telemetry Log: Not Supported 00:22:55.757 Error Log Page Entries Supported: 128 00:22:55.757 Keep Alive: Not Supported 00:22:55.757 00:22:55.757 NVM Command Set Attributes 00:22:55.757 ========================== 00:22:55.757 Submission Queue Entry Size 00:22:55.757 Max: 1 00:22:55.757 Min: 1 00:22:55.757 Completion Queue Entry Size 00:22:55.757 Max: 1 00:22:55.757 Min: 1 00:22:55.757 Number of Namespaces: 0 00:22:55.757 Compare Command: Not Supported 00:22:55.757 Write Uncorrectable Command: Not Supported 00:22:55.757 Dataset Management Command: Not Supported 00:22:55.757 Write Zeroes Command: Not Supported 00:22:55.757 Set Features Save Field: Not Supported 00:22:55.757 Reservations: Not Supported 00:22:55.757 Timestamp: Not Supported 00:22:55.757 Copy: Not Supported 00:22:55.757 Volatile Write Cache: Not Present 00:22:55.757 Atomic Write Unit (Normal): 1 00:22:55.757 Atomic Write Unit (PFail): 1 00:22:55.757 Atomic Compare & Write Unit: 1 00:22:55.757 Fused Compare & Write: Supported 00:22:55.757 Scatter-Gather List 00:22:55.757 SGL Command Set: Supported 00:22:55.757 SGL Keyed: Supported 00:22:55.757 SGL Bit Bucket Descriptor: Not Supported 00:22:55.757 SGL Metadata Pointer: Not Supported 00:22:55.757 Oversized SGL: Not Supported 00:22:55.757 SGL Metadata Address: Not Supported 00:22:55.757 SGL Offset: Supported 00:22:55.757 Transport SGL Data Block: Not Supported 00:22:55.757 Replay Protected Memory Block: Not Supported 00:22:55.757 00:22:55.757 Firmware Slot Information 00:22:55.757 ========================= 00:22:55.757 Active slot: 0 00:22:55.757 00:22:55.757 00:22:55.757 Error Log 00:22:55.757 ========= 00:22:55.757 00:22:55.757 Active Namespaces 00:22:55.757 ================= 00:22:55.757 Discovery Log Page 00:22:55.757 ================== 00:22:55.757 Generation Counter: 2 00:22:55.757 Number of Records: 2 00:22:55.757 Record Format: 0 00:22:55.757 00:22:55.757 Discovery Log Entry 0 00:22:55.757 ---------------------- 00:22:55.757 Transport Type: 3 (TCP) 00:22:55.757 Address Family: 1 (IPv4) 00:22:55.757 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:55.757 Entry Flags: 00:22:55.757 Duplicate Returned Information: 1 00:22:55.757 Explicit Persistent Connection Support for Discovery: 1 00:22:55.757 Transport Requirements: 00:22:55.757 Secure Channel: Not Required 00:22:55.757 Port ID: 0 (0x0000) 00:22:55.757 Controller ID: 65535 (0xffff) 00:22:55.757 Admin Max SQ Size: 128 00:22:55.757 Transport Service Identifier: 4420 00:22:55.757 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:55.757 Transport Address: 10.0.0.2 00:22:55.757 
Discovery Log Entry 1 00:22:55.757 ---------------------- 00:22:55.757 Transport Type: 3 (TCP) 00:22:55.757 Address Family: 1 (IPv4) 00:22:55.757 Subsystem Type: 2 (NVM Subsystem) 00:22:55.757 Entry Flags: 00:22:55.757 Duplicate Returned Information: 0 00:22:55.757 Explicit Persistent Connection Support for Discovery: 0 00:22:55.757 Transport Requirements: 00:22:55.757 Secure Channel: Not Required 00:22:55.757 Port ID: 0 (0x0000) 00:22:55.757 Controller ID: 65535 (0xffff) 00:22:55.757 Admin Max SQ Size: 128 00:22:55.757 Transport Service Identifier: 4420 00:22:55.757 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:55.757 Transport Address: 10.0.0.2 [2024-07-15 07:58:40.260339] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:55.757 [2024-07-15 07:58:40.260349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebe40) on tqpair=0x2268ec0 00:22:55.757 [2024-07-15 07:58:40.260356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.757 [2024-07-15 07:58:40.260361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ebfc0) on tqpair=0x2268ec0 00:22:55.757 [2024-07-15 07:58:40.260365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.757 [2024-07-15 07:58:40.260369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec140) on tqpair=0x2268ec0 00:22:55.757 [2024-07-15 07:58:40.260373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.757 [2024-07-15 07:58:40.260377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.757 [2024-07-15 07:58:40.260381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.758 [2024-07-15 07:58:40.260390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.260404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.260418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.260565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.260571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.260574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.260583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 
07:58:40.260595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.260609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.260742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.260747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.260750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.260758] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:55.758 [2024-07-15 07:58:40.260763] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:55.758 [2024-07-15 07:58:40.260773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.260785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.260794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.260891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.260896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.260899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.260911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.260917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.260923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.260932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261028] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.261848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.261854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.261856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.261867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.261875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.261881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.261890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 
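The formatted report further up is spdk_nvme_identify printing the discovery controller's identify data followed by the discovery log page (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both at 10.0.0.2:4420 with Controller ID 0xffff, i.e. the dynamic-controller model). In the capsule traffic that precedes it, this is the GET LOG PAGE (02) commands on cid 4 against log identifier 0x70: the 1024-byte header is read first, the remaining entries follow, and the final 8-byte read looks like the generation counter being re-checked, all before the teardown that starts at "Prepare to destruct". A minimal sketch of the same fetch through SPDK's public API, assuming `ctrlr` is already connected to the discovery subsystem; the helper names and the busy-poll loop are illustrative, and completion-status checks are omitted:

/* Sketch: fetch and walk the NVMe-oF discovery log page (LID 0x70),
 * assuming an spdk_nvme_ctrlr connected to the discovery subsystem. */
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
{
	g_log_done = false;
	/* GET LOG PAGE (02) on log identifier 0x70, as in the capsules above. */
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     SPDK_NVME_GLOBAL_NS_TAG, buf, len,
					     0, log_page_done, NULL) != 0) {
		return -1;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}

static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log, *tmp;
	uint64_t i;
	uint32_t size;

	/* Header first, to learn the record count ("Number of Records: 2"). */
	size = sizeof(*log);
	log = calloc(1, size);
	if (log == NULL || read_discovery_log(ctrlr, log, size) != 0) {
		free(log);
		return;
	}

	/* Then the header plus all entries in one read. */
	size += log->numrec * sizeof(struct spdk_nvmf_discovery_log_page_entry);
	tmp = realloc(log, size);
	if (tmp == NULL) {
		free(log);
		return;
	}
	log = tmp;
	if (read_discovery_log(ctrlr, log, size) != 0) {
		free(log);
		return;
	}

	for (i = 0; i < log->numrec; i++) {
		/* subnqn is a fixed 256-byte field, so bound the print. */
		printf("entry %" PRIu64 ": subnqn %.*s\n", i,
		       (int)sizeof(log->entries[i].subnqn),
		       log->entries[i].subnqn);
	}
	free(log);
}

A production caller would also compare genctr before and after reading the entries and retry if the page changed mid-read, which is what the trailing 8-byte GET LOG PAGE in this log appears to be doing.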
[2024-07-15 07:58:40.261997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.262003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.262006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.262017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.262028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.758 [2024-07-15 07:58:40.262037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.758 [2024-07-15 07:58:40.262116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.758 [2024-07-15 07:58:40.262121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.758 [2024-07-15 07:58:40.262124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.758 [2024-07-15 07:58:40.262136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.758 [2024-07-15 07:58:40.262142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.758 [2024-07-15 07:58:40.262148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.262260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.262282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:55.759 [2024-07-15 07:58:40.262411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.262437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.262561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.262584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.262675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.262698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.262813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.262837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.262846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.262969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.262974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.262977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.262989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.262995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263244] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 
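From "Prepare to destruct" onward, the run of near-identical FABRIC PROPERTY GET qid:0 cid:3 capsules is the shutdown path: the outstanding admin requests were first completed as ABORTED - SQ DELETION, CC was written for shutdown (the FABRIC PROPERTY SET above, with RTD3E = 0 so a default 10000 ms timeout), and CSTS is now being polled over the fabric until SHST reads back complete ("shutdown complete in 7 milliseconds" just below). An application never issues these property reads itself; they fall out of detaching the controller. A hedged sketch of the asynchronous detach flow, assuming `ctrlr` is a connected controller; the synchronous spdk_nvme_detach() wraps the same loop:

/* Sketch: the teardown driving the capsules above. spdk_nvme_detach_async()
 * starts the CC shutdown write; each poll pumps the admin queue, which is
 * what produces the stream of CSTS property-get capsules, until the
 * controller reports shutdown complete. */
#include "spdk/nvme.h"

static void
shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
		return;
	}
	/* Returns -EAGAIN while the shutdown handshake is still in flight. */
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		;
	}
}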
[2024-07-15 07:58:40.263692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.759 [2024-07-15 07:58:40.263797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-07-15 07:58:40.263806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.759 [2024-07-15 07:58:40.263915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.759 [2024-07-15 07:58:40.263920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.759 [2024-07-15 07:58:40.263923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.759 [2024-07-15 07:58:40.263934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.759 [2024-07-15 07:58:40.263940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.760 [2024-07-15 07:58:40.263946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.263954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.760 [2024-07-15 07:58:40.264066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.264071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.264074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.264077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.760 [2024-07-15 07:58:40.264085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.264088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.264091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.760 [2024-07-15 07:58:40.264097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.264106] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.760 [2024-07-15 07:58:40.264216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.264221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.268230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.268235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.760 [2024-07-15 07:58:40.268245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.268248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.268251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2268ec0) 00:22:55.760 [2024-07-15 07:58:40.268257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.268267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ec2c0, cid 3, qid 0 00:22:55.760 [2024-07-15 07:58:40.268422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.268428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.268430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.268433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ec2c0) on tqpair=0x2268ec0 00:22:55.760 [2024-07-15 07:58:40.268440] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:55.760 00:22:55.760 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:55.760 [2024-07-15 07:58:40.306495] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:22:55.760 [2024-07-15 07:58:40.306544] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335121 ] 00:22:55.760 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.760 [2024-07-15 07:58:40.335544] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:55.760 [2024-07-15 07:58:40.335584] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:55.760 [2024-07-15 07:58:40.335589] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:55.760 [2024-07-15 07:58:40.335599] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:55.760 [2024-07-15 07:58:40.335605] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:55.760 [2024-07-15 07:58:40.335818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:55.760 [2024-07-15 07:58:40.335841] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbb6ec0 0 00:22:55.760 [2024-07-15 07:58:40.350236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:55.760 [2024-07-15 07:58:40.350246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:55.760 [2024-07-15 07:58:40.350249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:55.760 [2024-07-15 07:58:40.350252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:55.760 [2024-07-15 07:58:40.350279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.350284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.350288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.350300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:55.760 [2024-07-15 07:58:40.350314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.358235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.358243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.358246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.760 [2024-07-15 07:58:40.358257] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:55.760 [2024-07-15 07:58:40.358262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:55.760 [2024-07-15 07:58:40.358267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:55.760 [2024-07-15 07:58:40.358277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 
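The identify step is then repeated against the NVM subsystem proper: the harness line above runs spdk_nvme_identify with a transport-ID string naming subnqn nqn.2016-06.io.spdk:cnode1, and the records that follow show a fresh admin queue coming up (posix socket connect, the ICReq/ICResp exchange as pdu type 1, FABRIC CONNECT returning CNTLID 0x0001). The -r argument is a standard SPDK transport-ID string; a minimal sketch of the same connect through the public API, with env setup as in SPDK's hello_world example and an illustrative process name:

/* Sketch: the connect that spdk_nvme_identify performs for the -r string
 * above, via SPDK's public API. */
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the socket connect, ICReq/ICResp exchange and FABRIC CONNECT
	 * recorded above; returns once the controller is fully initialized. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected, CNTLID from identify data: 0x%04x\n",
	       spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Note that spdk_nvme_connect() only returns after the whole init state machine logged below has run, so the register reads and the controller-enable writes all happen inside this one call.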
[2024-07-15 07:58:40.358284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.358290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.358303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.358463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.358469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.358472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.760 [2024-07-15 07:58:40.358479] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:55.760 [2024-07-15 07:58:40.358485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:55.760 [2024-07-15 07:58:40.358491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.358503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.358514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.358582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.358587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.358590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.760 [2024-07-15 07:58:40.358598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:55.760 [2024-07-15 07:58:40.358605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:55.760 [2024-07-15 07:58:40.358611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.358623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.358635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.358700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.358705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 
[2024-07-15 07:58:40.358708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.760 [2024-07-15 07:58:40.358715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:55.760 [2024-07-15 07:58:40.358724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.358736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.358745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.358809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.358815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.358818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.760 [2024-07-15 07:58:40.358824] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:55.760 [2024-07-15 07:58:40.358828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:55.760 [2024-07-15 07:58:40.358835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:55.760 [2024-07-15 07:58:40.358939] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:55.760 [2024-07-15 07:58:40.358943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:55.760 [2024-07-15 07:58:40.358949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.358955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.760 [2024-07-15 07:58:40.358961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.760 [2024-07-15 07:58:40.358971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.760 [2024-07-15 07:58:40.359038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.760 [2024-07-15 07:58:40.359043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.760 [2024-07-15 07:58:40.359046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.760 [2024-07-15 07:58:40.359049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.761 [2024-07-15 
07:58:40.359053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:55.761 [2024-07-15 07:58:40.359061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.359075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.761 [2024-07-15 07:58:40.359085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.761 [2024-07-15 07:58:40.359147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.761 [2024-07-15 07:58:40.359153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.761 [2024-07-15 07:58:40.359155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.761 [2024-07-15 07:58:40.359162] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:55.761 [2024-07-15 07:58:40.359166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.359173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:55.761 [2024-07-15 07:58:40.359179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.359187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.359196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.761 [2024-07-15 07:58:40.359205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.761 [2024-07-15 07:58:40.359315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.761 [2024-07-15 07:58:40.359321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.761 [2024-07-15 07:58:40.359324] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359327] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=4096, cccid=0 00:22:55.761 [2024-07-15 07:58:40.359330] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc39e40) on tqpair(0xbb6ec0): expected_datao=0, payload_size=4096 00:22:55.761 [2024-07-15 07:58:40.359334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359344] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.359348] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.761 
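The state transitions just logged are the standard controller-enable sequence, with each register access becoming a FABRIC PROPERTY GET/SET capsule on the admin queue: read vs, read cap, check en, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY (06) with a 4096-byte controller-to-host payload. Once attached, the driver's cached copies of those registers can be read back without further fabric traffic; a small sketch, assuming `ctrlr` came from spdk_nvme_connect():

/* Sketch: read back the registers the enable sequence above fetched over
 * the fabric during bring-up. */
#include "spdk/nvme.h"

static void
print_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* The discovery controller earlier in this log reported VS 1.3. */
	printf("VS: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	/* CAP.MQES is zero-based: "Maximum Queue Entries: 128" is mqes + 1. */
	printf("Max queue entries: %u\n", cap.bits.mqes + 1);
	/* RDY went 0 -> 1 during the CC.EN = 1 step logged above. */
	printf("CSTS.RDY: %u\n", csts.bits.rdy);
}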
[2024-07-15 07:58:40.400374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.761 [2024-07-15 07:58:40.400384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.761 [2024-07-15 07:58:40.400387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.761 [2024-07-15 07:58:40.400398] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:55.761 [2024-07-15 07:58:40.400405] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:55.761 [2024-07-15 07:58:40.400409] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:55.761 [2024-07-15 07:58:40.400412] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:55.761 [2024-07-15 07:58:40.400417] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:55.761 [2024-07-15 07:58:40.400421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.400451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.761 [2024-07-15 07:58:40.400464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.761 [2024-07-15 07:58:40.400534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.761 [2024-07-15 07:58:40.400539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.761 [2024-07-15 07:58:40.400543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.761 [2024-07-15 07:58:40.400551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.400563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.761 [2024-07-15 07:58:40.400568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbb6ec0) 
00:22:55.761 [2024-07-15 07:58:40.400579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.761 [2024-07-15 07:58:40.400584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.400595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.761 [2024-07-15 07:58:40.400600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.400610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.761 [2024-07-15 07:58:40.400614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.761 [2024-07-15 07:58:40.400638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.761 [2024-07-15 07:58:40.400649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39e40, cid 0, qid 0 00:22:55.761 [2024-07-15 07:58:40.400654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc39fc0, cid 1, qid 0 00:22:55.761 [2024-07-15 07:58:40.400658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a140, cid 2, qid 0 00:22:55.761 [2024-07-15 07:58:40.400662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.761 [2024-07-15 07:58:40.400668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.761 [2024-07-15 07:58:40.400767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.761 [2024-07-15 07:58:40.400773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.761 [2024-07-15 07:58:40.400776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.761 [2024-07-15 07:58:40.400779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.761 [2024-07-15 07:58:40.400783] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:55.761 [2024-07-15 07:58:40.400787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400794] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:55.761 [2024-07-15 07:58:40.400805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.400809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.400812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.400817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:55.762 [2024-07-15 07:58:40.400827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.762 [2024-07-15 07:58:40.400900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.400905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.400908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.400911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.400963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.400971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.400978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.400981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.400987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.400996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.762 [2024-07-15 07:58:40.401075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.762 [2024-07-15 07:58:40.401080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.762 [2024-07-15 07:58:40.401083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401086] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=4096, cccid=4 00:22:55.762 [2024-07-15 07:58:40.401090] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a440) on tqpair(0xbb6ec0): expected_datao=0, payload_size=4096 00:22:55.762 [2024-07-15 07:58:40.401093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401099] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401102] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401134] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:55.762 [2024-07-15 07:58:40.401145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.401178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.762 [2024-07-15 07:58:40.401268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.762 [2024-07-15 07:58:40.401273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.762 [2024-07-15 07:58:40.401277] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401279] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=4096, cccid=4 00:22:55.762 [2024-07-15 07:58:40.401283] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a440) on tqpair(0xbb6ec0): expected_datao=0, payload_size=4096 00:22:55.762 [2024-07-15 07:58:40.401287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401292] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401295] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.401367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.762 [2024-07-15 07:58:40.401446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.762 [2024-07-15 07:58:40.401452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.762 [2024-07-15 07:58:40.401455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401458] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=4096, cccid=4 00:22:55.762 [2024-07-15 07:58:40.401462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a440) on tqpair(0xbb6ec0): expected_datao=0, payload_size=4096 00:22:55.762 [2024-07-15 07:58:40.401465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401475] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401478] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401536] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:55.762 [2024-07-15 07:58:40.401540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:55.762 [2024-07-15 07:58:40.401544] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:55.762 [2024-07-15 07:58:40.401557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.401572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.762 [2024-07-15 07:58:40.401595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.762 [2024-07-15 07:58:40.401600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a5c0, cid 5, qid 0 00:22:55.762 [2024-07-15 07:58:40.401675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a5c0) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.401731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a5c0, cid 5, qid 0 00:22:55.762 [2024-07-15 07:58:40.401801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a5c0) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.762 [2024-07-15 07:58:40.401839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a5c0, cid 5, qid 0 00:22:55.762 [2024-07-15 07:58:40.401905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.762 [2024-07-15 07:58:40.401911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:55.762 [2024-07-15 07:58:40.401914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a5c0) on tqpair=0xbb6ec0 00:22:55.762 [2024-07-15 07:58:40.401924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.762 [2024-07-15 07:58:40.401928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb6ec0) 00:22:55.762 [2024-07-15 07:58:40.401934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.763 [2024-07-15 07:58:40.401942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a5c0, cid 5, qid 0 00:22:55.763 [2024-07-15 07:58:40.402012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.763 [2024-07-15 07:58:40.402017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.763 [2024-07-15 07:58:40.402020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.402023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a5c0) on tqpair=0xbb6ec0 00:22:55.763 [2024-07-15 07:58:40.402036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.402040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbb6ec0) 00:22:55.763 [2024-07-15 07:58:40.402045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.763 [2024-07-15 07:58:40.402051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.402054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbb6ec0) 00:22:55.763 [2024-07-15 07:58:40.402059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.763 [2024-07-15 07:58:40.402065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.402069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbb6ec0) 00:22:55.763 [2024-07-15 07:58:40.402074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.763 [2024-07-15 07:58:40.402080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.402083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbb6ec0) 00:22:55.763 [2024-07-15 07:58:40.402090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.763 [2024-07-15 07:58:40.402100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a5c0, cid 5, qid 0 00:22:55.763 [2024-07-15 07:58:40.402105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a440, cid 4, qid 0 00:22:55.763 [2024-07-15 07:58:40.402109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a740, cid 6, qid 0 00:22:55.763 [2024-07-15 
07:58:40.402113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a8c0, cid 7, qid 0 00:22:55.763 [2024-07-15 07:58:40.406236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.763 [2024-07-15 07:58:40.406243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.763 [2024-07-15 07:58:40.406246] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406249] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=8192, cccid=5 00:22:55.763 [2024-07-15 07:58:40.406253] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a5c0) on tqpair(0xbb6ec0): expected_datao=0, payload_size=8192 00:22:55.763 [2024-07-15 07:58:40.406256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406263] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406266] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.763 [2024-07-15 07:58:40.406275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.763 [2024-07-15 07:58:40.406278] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406281] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=512, cccid=4 00:22:55.763 [2024-07-15 07:58:40.406285] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a440) on tqpair(0xbb6ec0): expected_datao=0, payload_size=512 00:22:55.763 [2024-07-15 07:58:40.406288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406294] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406296] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.763 [2024-07-15 07:58:40.406306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.763 [2024-07-15 07:58:40.406309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406312] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=512, cccid=6 00:22:55.763 [2024-07-15 07:58:40.406315] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a740) on tqpair(0xbb6ec0): expected_datao=0, payload_size=512 00:22:55.763 [2024-07-15 07:58:40.406319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406324] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406327] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:55.763 [2024-07-15 07:58:40.406336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:55.763 [2024-07-15 07:58:40.406339] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406342] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbb6ec0): datao=0, datal=4096, cccid=7 00:22:55.763 [2024-07-15 07:58:40.406346] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc3a8c0) on tqpair(0xbb6ec0): expected_datao=0, payload_size=4096 00:22:55.763 [2024-07-15 07:58:40.406349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406355] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406360] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.763 [2024-07-15 07:58:40.406369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.763 [2024-07-15 07:58:40.406372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a5c0) on tqpair=0xbb6ec0 00:22:55.763 [2024-07-15 07:58:40.406385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.763 [2024-07-15 07:58:40.406390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.763 [2024-07-15 07:58:40.406393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a440) on tqpair=0xbb6ec0 00:22:55.763 [2024-07-15 07:58:40.406404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.763 [2024-07-15 07:58:40.406409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.763 [2024-07-15 07:58:40.406412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a740) on tqpair=0xbb6ec0 00:22:55.763 [2024-07-15 07:58:40.406421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.763 [2024-07-15 07:58:40.406426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.763 [2024-07-15 07:58:40.406428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.763 [2024-07-15 07:58:40.406432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a8c0) on tqpair=0xbb6ec0 00:22:55.763 ===================================================== 00:22:55.763 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.763 ===================================================== 00:22:55.763 Controller Capabilities/Features 00:22:55.763 ================================ 00:22:55.763 Vendor ID: 8086 00:22:55.763 Subsystem Vendor ID: 8086 00:22:55.763 Serial Number: SPDK00000000000001 00:22:55.763 Model Number: SPDK bdev Controller 00:22:55.763 Firmware Version: 24.09 00:22:55.763 Recommended Arb Burst: 6 00:22:55.763 IEEE OUI Identifier: e4 d2 5c 00:22:55.763 Multi-path I/O 00:22:55.763 May have multiple subsystem ports: Yes 00:22:55.763 May have multiple controllers: Yes 00:22:55.763 Associated with SR-IOV VF: No 00:22:55.763 Max Data Transfer Size: 131072 00:22:55.763 Max Number of Namespaces: 32 00:22:55.763 Max Number of I/O Queues: 127 00:22:55.763 NVMe Specification Version (VS): 1.3 00:22:55.763 NVMe Specification Version (Identify): 1.3 00:22:55.763 Maximum Queue Entries: 128 00:22:55.763 Contiguous Queues Required: Yes 00:22:55.763 Arbitration Mechanisms Supported 00:22:55.763 Weighted Round Robin: Not Supported 00:22:55.763 Vendor Specific: Not Supported 00:22:55.763 Reset Timeout: 15000 ms 00:22:55.763 
Doorbell Stride: 4 bytes 00:22:55.763 NVM Subsystem Reset: Not Supported 00:22:55.763 Command Sets Supported 00:22:55.763 NVM Command Set: Supported 00:22:55.763 Boot Partition: Not Supported 00:22:55.763 Memory Page Size Minimum: 4096 bytes 00:22:55.763 Memory Page Size Maximum: 4096 bytes 00:22:55.763 Persistent Memory Region: Not Supported 00:22:55.763 Optional Asynchronous Events Supported 00:22:55.763 Namespace Attribute Notices: Supported 00:22:55.763 Firmware Activation Notices: Not Supported 00:22:55.763 ANA Change Notices: Not Supported 00:22:55.763 PLE Aggregate Log Change Notices: Not Supported 00:22:55.763 LBA Status Info Alert Notices: Not Supported 00:22:55.763 EGE Aggregate Log Change Notices: Not Supported 00:22:55.763 Normal NVM Subsystem Shutdown event: Not Supported 00:22:55.763 Zone Descriptor Change Notices: Not Supported 00:22:55.763 Discovery Log Change Notices: Not Supported 00:22:55.763 Controller Attributes 00:22:55.763 128-bit Host Identifier: Supported 00:22:55.763 Non-Operational Permissive Mode: Not Supported 00:22:55.763 NVM Sets: Not Supported 00:22:55.763 Read Recovery Levels: Not Supported 00:22:55.763 Endurance Groups: Not Supported 00:22:55.763 Predictable Latency Mode: Not Supported 00:22:55.763 Traffic Based Keep ALive: Not Supported 00:22:55.763 Namespace Granularity: Not Supported 00:22:55.763 SQ Associations: Not Supported 00:22:55.763 UUID List: Not Supported 00:22:55.763 Multi-Domain Subsystem: Not Supported 00:22:55.763 Fixed Capacity Management: Not Supported 00:22:55.763 Variable Capacity Management: Not Supported 00:22:55.763 Delete Endurance Group: Not Supported 00:22:55.763 Delete NVM Set: Not Supported 00:22:55.763 Extended LBA Formats Supported: Not Supported 00:22:55.763 Flexible Data Placement Supported: Not Supported 00:22:55.763 00:22:55.763 Controller Memory Buffer Support 00:22:55.763 ================================ 00:22:55.763 Supported: No 00:22:55.763 00:22:55.763 Persistent Memory Region Support 00:22:55.763 ================================ 00:22:55.763 Supported: No 00:22:55.763 00:22:55.763 Admin Command Set Attributes 00:22:55.763 ============================ 00:22:55.763 Security Send/Receive: Not Supported 00:22:55.763 Format NVM: Not Supported 00:22:55.763 Firmware Activate/Download: Not Supported 00:22:55.763 Namespace Management: Not Supported 00:22:55.763 Device Self-Test: Not Supported 00:22:55.763 Directives: Not Supported 00:22:55.763 NVMe-MI: Not Supported 00:22:55.763 Virtualization Management: Not Supported 00:22:55.764 Doorbell Buffer Config: Not Supported 00:22:55.764 Get LBA Status Capability: Not Supported 00:22:55.764 Command & Feature Lockdown Capability: Not Supported 00:22:55.764 Abort Command Limit: 4 00:22:55.764 Async Event Request Limit: 4 00:22:55.764 Number of Firmware Slots: N/A 00:22:55.764 Firmware Slot 1 Read-Only: N/A 00:22:55.764 Firmware Activation Without Reset: N/A 00:22:55.764 Multiple Update Detection Support: N/A 00:22:55.764 Firmware Update Granularity: No Information Provided 00:22:55.764 Per-Namespace SMART Log: No 00:22:55.764 Asymmetric Namespace Access Log Page: Not Supported 00:22:55.764 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:55.764 Command Effects Log Page: Supported 00:22:55.764 Get Log Page Extended Data: Supported 00:22:55.764 Telemetry Log Pages: Not Supported 00:22:55.764 Persistent Event Log Pages: Not Supported 00:22:55.764 Supported Log Pages Log Page: May Support 00:22:55.764 Commands Supported & Effects Log Page: Not Supported 00:22:55.764 Feature Identifiers & 
Effects Log Page:May Support 00:22:55.764 NVMe-MI Commands & Effects Log Page: May Support 00:22:55.764 Data Area 4 for Telemetry Log: Not Supported 00:22:55.764 Error Log Page Entries Supported: 128 00:22:55.764 Keep Alive: Supported 00:22:55.764 Keep Alive Granularity: 10000 ms 00:22:55.764 00:22:55.764 NVM Command Set Attributes 00:22:55.764 ========================== 00:22:55.764 Submission Queue Entry Size 00:22:55.764 Max: 64 00:22:55.764 Min: 64 00:22:55.764 Completion Queue Entry Size 00:22:55.764 Max: 16 00:22:55.764 Min: 16 00:22:55.764 Number of Namespaces: 32 00:22:55.764 Compare Command: Supported 00:22:55.764 Write Uncorrectable Command: Not Supported 00:22:55.764 Dataset Management Command: Supported 00:22:55.764 Write Zeroes Command: Supported 00:22:55.764 Set Features Save Field: Not Supported 00:22:55.764 Reservations: Supported 00:22:55.764 Timestamp: Not Supported 00:22:55.764 Copy: Supported 00:22:55.764 Volatile Write Cache: Present 00:22:55.764 Atomic Write Unit (Normal): 1 00:22:55.764 Atomic Write Unit (PFail): 1 00:22:55.764 Atomic Compare & Write Unit: 1 00:22:55.764 Fused Compare & Write: Supported 00:22:55.764 Scatter-Gather List 00:22:55.764 SGL Command Set: Supported 00:22:55.764 SGL Keyed: Supported 00:22:55.764 SGL Bit Bucket Descriptor: Not Supported 00:22:55.764 SGL Metadata Pointer: Not Supported 00:22:55.764 Oversized SGL: Not Supported 00:22:55.764 SGL Metadata Address: Not Supported 00:22:55.764 SGL Offset: Supported 00:22:55.764 Transport SGL Data Block: Not Supported 00:22:55.764 Replay Protected Memory Block: Not Supported 00:22:55.764 00:22:55.764 Firmware Slot Information 00:22:55.764 ========================= 00:22:55.764 Active slot: 1 00:22:55.764 Slot 1 Firmware Revision: 24.09 00:22:55.764 00:22:55.764 00:22:55.764 Commands Supported and Effects 00:22:55.764 ============================== 00:22:55.764 Admin Commands 00:22:55.764 -------------- 00:22:55.764 Get Log Page (02h): Supported 00:22:55.764 Identify (06h): Supported 00:22:55.764 Abort (08h): Supported 00:22:55.764 Set Features (09h): Supported 00:22:55.764 Get Features (0Ah): Supported 00:22:55.764 Asynchronous Event Request (0Ch): Supported 00:22:55.764 Keep Alive (18h): Supported 00:22:55.764 I/O Commands 00:22:55.764 ------------ 00:22:55.764 Flush (00h): Supported LBA-Change 00:22:55.764 Write (01h): Supported LBA-Change 00:22:55.764 Read (02h): Supported 00:22:55.764 Compare (05h): Supported 00:22:55.764 Write Zeroes (08h): Supported LBA-Change 00:22:55.764 Dataset Management (09h): Supported LBA-Change 00:22:55.764 Copy (19h): Supported LBA-Change 00:22:55.764 00:22:55.764 Error Log 00:22:55.764 ========= 00:22:55.764 00:22:55.764 Arbitration 00:22:55.764 =========== 00:22:55.764 Arbitration Burst: 1 00:22:55.764 00:22:55.764 Power Management 00:22:55.764 ================ 00:22:55.764 Number of Power States: 1 00:22:55.764 Current Power State: Power State #0 00:22:55.764 Power State #0: 00:22:55.764 Max Power: 0.00 W 00:22:55.764 Non-Operational State: Operational 00:22:55.764 Entry Latency: Not Reported 00:22:55.764 Exit Latency: Not Reported 00:22:55.764 Relative Read Throughput: 0 00:22:55.764 Relative Read Latency: 0 00:22:55.764 Relative Write Throughput: 0 00:22:55.764 Relative Write Latency: 0 00:22:55.764 Idle Power: Not Reported 00:22:55.764 Active Power: Not Reported 00:22:55.764 Non-Operational Permissive Mode: Not Supported 00:22:55.764 00:22:55.764 Health Information 00:22:55.764 ================== 00:22:55.764 Critical Warnings: 00:22:55.764 Available Spare Space: 
OK 00:22:55.764 Temperature: OK 00:22:55.764 Device Reliability: OK 00:22:55.764 Read Only: No 00:22:55.764 Volatile Memory Backup: OK 00:22:55.764 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:55.764 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:55.764 Available Spare: 0% 00:22:55.764 Available Spare Threshold: 0% 00:22:55.764 Life Percentage Used:[2024-07-15 07:58:40.406515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbb6ec0) 00:22:55.764 [2024-07-15 07:58:40.406525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.764 [2024-07-15 07:58:40.406537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a8c0, cid 7, qid 0 00:22:55.764 [2024-07-15 07:58:40.406728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.764 [2024-07-15 07:58:40.406733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.764 [2024-07-15 07:58:40.406736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a8c0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406768] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:55.764 [2024-07-15 07:58:40.406777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39e40) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.764 [2024-07-15 07:58:40.406787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc39fc0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.764 [2024-07-15 07:58:40.406795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a140) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.764 [2024-07-15 07:58:40.406803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.764 [2024-07-15 07:58:40.406813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.764 [2024-07-15 07:58:40.406826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.764 [2024-07-15 07:58:40.406837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.764 [2024-07-15 07:58:40.406906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.764 [2024-07-15 07:58:40.406912] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.764 [2024-07-15 07:58:40.406915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.406924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.406930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.764 [2024-07-15 07:58:40.406935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.764 [2024-07-15 07:58:40.406947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.764 [2024-07-15 07:58:40.407023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.764 [2024-07-15 07:58:40.407028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.764 [2024-07-15 07:58:40.407031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.407038] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:55.764 [2024-07-15 07:58:40.407042] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:55.764 [2024-07-15 07:58:40.407049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.764 [2024-07-15 07:58:40.407062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.764 [2024-07-15 07:58:40.407071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.764 [2024-07-15 07:58:40.407136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.764 [2024-07-15 07:58:40.407141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.764 [2024-07-15 07:58:40.407145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.764 [2024-07-15 07:58:40.407156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.764 [2024-07-15 07:58:40.407162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.764 [2024-07-15 07:58:40.407168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.764 [2024-07-15 07:58:40.407177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407256] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407611] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.407846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.407940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.407946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.407949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 
[2024-07-15 07:58:40.407960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.407966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.407972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.407980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.408043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.408048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.408051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.408063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.408075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.408084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.408153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.408158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.408161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.408173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.408186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.408195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.408269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.408275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.408278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.765 [2024-07-15 07:58:40.408289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:55.765 [2024-07-15 
07:58:40.408295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbb6ec0) 00:22:55.765 [2024-07-15 07:58:40.408301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.765 [2024-07-15 07:58:40.408310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc3a2c0, cid 3, qid 0 00:22:55.765 [2024-07-15 07:58:40.408388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.765 [2024-07-15 07:58:40.408393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.765 [2024-07-15 07:58:40.408396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.765 [2024-07-15 07:58:40.408399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0
[... the same FABRIC PROPERTY GET / capsule-response debug cycle repeats verbatim with advancing timestamps (07:58:40.408408 through 07:58:40.414280); duplicate iterations elided ...]
00:22:55.767 [2024-07-15 07:58:40.414441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:55.767 [2024-07-15 07:58:40.414446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:55.767 [2024-07-15 07:58:40.414449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:55.767 [2024-07-15 07:58:40.414453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc3a2c0) on tqpair=0xbb6ec0 00:22:55.767 [2024-07-15 07:58:40.414459] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:55.767 0% 00:22:55.767 Data Units Read: 0 00:22:55.767 Data Units Written: 0 00:22:55.767 Host Read Commands: 0 00:22:55.767 Host Write Commands: 0 00:22:55.767 Controller Busy Time: 0 minutes 00:22:55.767 Power Cycles: 0 00:22:55.767 Power On Hours: 0 hours
00:22:55.767 Unsafe Shutdowns: 0 00:22:55.767 Unrecoverable Media Errors: 0 00:22:55.767 Lifetime Error Log Entries: 0 00:22:55.767 Warning Temperature Time: 0 minutes 00:22:55.767 Critical Temperature Time: 0 minutes 00:22:55.767 00:22:55.767 Number of Queues 00:22:55.767 ================ 00:22:55.767 Number of I/O Submission Queues: 127 00:22:55.767 Number of I/O Completion Queues: 127 00:22:55.767 00:22:55.767 Active Namespaces 00:22:55.767 ================= 00:22:55.767 Namespace ID:1 00:22:55.767 Error Recovery Timeout: Unlimited 00:22:55.767 Command Set Identifier: NVM (00h) 00:22:55.767 Deallocate: Supported 00:22:55.767 Deallocated/Unwritten Error: Not Supported 00:22:55.767 Deallocated Read Value: Unknown 00:22:55.767 Deallocate in Write Zeroes: Not Supported 00:22:55.767 Deallocated Guard Field: 0xFFFF 00:22:55.767 Flush: Supported 00:22:55.767 Reservation: Supported 00:22:55.767 Namespace Sharing Capabilities: Multiple Controllers 00:22:55.767 Size (in LBAs): 131072 (0GiB) 00:22:55.767 Capacity (in LBAs): 131072 (0GiB) 00:22:55.767 Utilization (in LBAs): 131072 (0GiB) 00:22:55.767 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:55.767 EUI64: ABCDEF0123456789 00:22:55.767 UUID: 2dfc2daa-5cee-408a-88db-87fb8cb1ace1 00:22:55.767 Thin Provisioning: Not Supported 00:22:55.767 Per-NS Atomic Units: Yes 00:22:55.767 Atomic Boundary Size (Normal): 0 00:22:55.767 Atomic Boundary Size (PFail): 0 00:22:55.767 Atomic Boundary Offset: 0 00:22:55.767 Maximum Single Source Range Length: 65535 00:22:55.767 Maximum Copy Length: 65535 00:22:55.767 Maximum Source Range Count: 1 00:22:55.767 NGUID/EUI64 Never Reused: No 00:22:55.767 Namespace Write Protected: No 00:22:55.767 Number of LBA Formats: 1 00:22:55.767 Current LBA Format: LBA Format #00 00:22:55.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:55.767 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:55.767 rmmod nvme_tcp 00:22:55.767 rmmod nvme_fabrics 00:22:55.767 rmmod nvme_keyring 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3334865 ']' 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # 
killprocess 3334865 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3334865 ']' 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3334865 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.767 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3334865 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3334865' 00:22:56.027 killing process with pid 3334865 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3334865 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3334865 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.027 07:58:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.577 07:58:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.577 00:22:58.577 real 0m9.681s 00:22:58.577 user 0m7.684s 00:22:58.577 sys 0m4.801s 00:22:58.577 07:58:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.577 07:58:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.577 ************************************ 00:22:58.577 END TEST nvmf_identify 00:22:58.577 ************************************ 00:22:58.577 07:58:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:58.577 07:58:42 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:58.577 07:58:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:58.577 07:58:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.577 07:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.577 ************************************ 00:22:58.577 START TEST nvmf_perf 00:22:58.577 ************************************ 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:58.577 * Looking for test storage... 
00:22:58.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.577 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:58.578 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:58.578 07:58:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.578 07:58:43 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:58.578 07:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.579 07:58:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.858 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:03.859 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:03.859 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:03.859 Found net devices under 0000:86:00.0: cvl_0_0 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:03.859 Found net devices under 0000:86:00.1: cvl_0_1 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.859 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:23:04.118 00:23:04.118 --- 10.0.0.2 ping statistics --- 00:23:04.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.118 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:23:04.118 00:23:04.118 --- 10.0.0.1 ping statistics --- 00:23:04.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.118 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3338584 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3338584 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3338584 ']' 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.118 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.119 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.119 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.119 07:58:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:04.119 [2024-07-15 07:58:48.787397] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:23:04.119 [2024-07-15 07:58:48.787440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.119 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.119 [2024-07-15 07:58:48.858522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.377 [2024-07-15 07:58:48.933260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.377 [2024-07-15 07:58:48.933303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
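For reference, the interface plumbing traced above (nvmf_tcp_init) reduces to a short, reproducible sequence: the first E810 port (cvl_0_0) is moved into a private network namespace to play the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and both directions are ping-checked. A condensed sketch, assuming the cvl_0_* interface names and 10.0.0.0/24 addressing shown in this log:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator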
00:23:04.377 [2024-07-15 07:58:48.933309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.377 [2024-07-15 07:58:48.933316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.377 [2024-07-15 07:58:48.933320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.378 [2024-07-15 07:58:48.933382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.378 [2024-07-15 07:58:48.933413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.378 [2024-07-15 07:58:48.933516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.378 [2024-07-15 07:58:48.933517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:05.019 07:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:08.327 07:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:08.327 07:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:08.327 07:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:08.327 07:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:08.327 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:08.327 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:08.327 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:08.327 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:08.327 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.586 [2024-07-15 07:58:53.208570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.586 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:08.843 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:08.843 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.101 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:09.101 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:09.101 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.359 [2024-07-15 07:58:53.943327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.359 07:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:09.617 07:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:09.617 07:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:09.617 07:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:09.617 07:58:54 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:10.993 Initializing NVMe Controllers 00:23:10.993 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:10.993 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:10.993 Initialization complete. Launching workers. 00:23:10.993 ======================================================== 00:23:10.993 Latency(us) 00:23:10.993 Device Information : IOPS MiB/s Average min max 00:23:10.993 PCIE (0000:5e:00.0) NSID 1 from core 0: 98327.02 384.09 324.93 34.85 4255.99 00:23:10.993 ======================================================== 00:23:10.993 Total : 98327.02 384.09 324.93 34.85 4255.99 00:23:10.993 00:23:10.993 07:58:55 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.993 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.370 Initializing NVMe Controllers 00:23:12.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.370 Initialization complete. Launching workers. 
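Stripped of the xtrace prefixes, the target bring-up recorded above is a plain rpc.py sequence: create the TCP transport, create a subsystem, attach one malloc-backed and one NVMe-backed namespace, then add data and discovery listeners. A condensed sketch (rpc.py stands for the full scripts/rpc.py path used throughout this log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py bdev_malloc_create 64 512                      # returns the bdev name Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420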
00:23:12.370 ======================================================== 00:23:12.370 Latency(us) 00:23:12.370 Device Information : IOPS MiB/s Average min max 00:23:12.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 204.00 0.80 5085.29 123.28 44922.96 00:23:12.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.00 0.25 16071.86 7924.59 47903.49 00:23:12.370 ======================================================== 00:23:12.370 Total : 268.00 1.05 7708.95 123.28 47903.49 00:23:12.370 00:23:12.370 07:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:12.370 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.746 Initializing NVMe Controllers 00:23:13.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:13.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:13.746 Initialization complete. Launching workers. 00:23:13.746 ======================================================== 00:23:13.746 Latency(us) 00:23:13.746 Device Information : IOPS MiB/s Average min max 00:23:13.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11075.00 43.26 2889.30 496.26 6214.71 00:23:13.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3873.00 15.13 8299.04 5911.35 16141.63 00:23:13.747 ======================================================== 00:23:13.747 Total : 14948.00 58.39 4290.96 496.26 16141.63 00:23:13.747 00:23:13.747 07:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:13.747 07:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:13.747 07:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:13.747 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.297 Initializing NVMe Controllers 00:23:16.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.297 Controller IO queue size 128, less than required. 00:23:16.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:16.297 Controller IO queue size 128, less than required. 00:23:16.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:16.297 Initialization complete. Launching workers. 
00:23:16.297 ======================================================== 00:23:16.297 Latency(us) 00:23:16.297 Device Information : IOPS MiB/s Average min max 00:23:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1823.96 455.99 70999.28 43945.50 96119.40 00:23:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.49 154.12 221035.43 85932.66 330688.03 00:23:16.297 ======================================================== 00:23:16.297 Total : 2440.45 610.11 108900.24 43945.50 330688.03 00:23:16.297 00:23:16.297 07:59:00 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:16.297 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.555 No valid NVMe controllers or AIO or URING devices found 00:23:16.555 Initializing NVMe Controllers 00:23:16.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.555 Controller IO queue size 128, less than required. 00:23:16.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:16.555 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:16.556 Controller IO queue size 128, less than required. 00:23:16.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:16.556 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:16.556 WARNING: Some requested NVMe devices were skipped 00:23:16.556 07:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:16.556 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.091 Initializing NVMe Controllers 00:23:19.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.091 Controller IO queue size 128, less than required. 00:23:19.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.091 Controller IO queue size 128, less than required. 00:23:19.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:19.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:19.091 Initialization complete. Launching workers. 
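The "Removing this ns from test" warnings above come down to simple arithmetic: spdk_nvme_perf only accepts an I/O size that is a whole number of LBAs, and 36964 is not a multiple of the 512-byte sector size, so both namespaces are dropped and no controllers remain for that run. A quick check, plus a hypothetical aligned variant of the same invocation (the size 36864 = 72 * 512 is an illustrative substitute, not a value taken from this log):

  echo $((36964 % 512))   # prints 100 -> misaligned, namespaces removed from the test
  echo $((36864 % 512))   # prints 0   -> a multiple of 512 would have been accepted
  spdk_nvme_perf -q 128 -o 36864 -O 4096 -w randrw -M 50 -t 5 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4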
00:23:19.091 00:23:19.091 ==================== 00:23:19.091 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:19.091 TCP transport: 00:23:19.091 polls: 17492 00:23:19.091 idle_polls: 13143 00:23:19.091 sock_completions: 4349 00:23:19.092 nvme_completions: 6479 00:23:19.092 submitted_requests: 9752 00:23:19.092 queued_requests: 1 00:23:19.092 00:23:19.092 ==================== 00:23:19.092 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:19.092 TCP transport: 00:23:19.092 polls: 13939 00:23:19.092 idle_polls: 7805 00:23:19.092 sock_completions: 6134 00:23:19.092 nvme_completions: 7377 00:23:19.092 submitted_requests: 11192 00:23:19.092 queued_requests: 1 00:23:19.092 ======================================================== 00:23:19.092 Latency(us) 00:23:19.092 Device Information : IOPS MiB/s Average min max 00:23:19.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.33 404.83 80115.71 61091.89 133255.67 00:23:19.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1843.80 460.95 69881.21 32657.74 101728.21 00:23:19.092 ======================================================== 00:23:19.092 Total : 3463.13 865.78 74666.76 32657.74 133255.67 00:23:19.092 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.092 rmmod nvme_tcp 00:23:19.092 rmmod nvme_fabrics 00:23:19.092 rmmod nvme_keyring 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3338584 ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3338584 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3338584 ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3338584 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3338584 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 3338584' 00:23:19.092 killing process with pid 3338584 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3338584 00:23:19.092 07:59:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3338584 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.995 07:59:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.899 07:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.899 00:23:22.899 real 0m24.504s 00:23:22.899 user 1m5.347s 00:23:22.899 sys 0m7.726s 00:23:22.899 07:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.899 07:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:22.899 ************************************ 00:23:22.899 END TEST nvmf_perf 00:23:22.899 ************************************ 00:23:22.899 07:59:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:22.899 07:59:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:22.899 07:59:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.899 07:59:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.899 07:59:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.899 ************************************ 00:23:22.899 START TEST nvmf_fio_host 00:23:22.899 ************************************ 00:23:22.899 07:59:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:22.899 * Looking for test storage... 
00:23:22.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.899 07:59:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=[toolchain PATH dump elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended, repeatedly, ahead of the system PATH] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=[elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=[elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo [PATH value elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=[elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=[elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=[elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 07:59:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo [PATH value elided] 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 07:59:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 07:59:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 07:59:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 07:59:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 07:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 07:59:07 nvmf_tcp.nvmf_fio_host --
common/autotest_common.sh@10 -- # set +x 00:23:29.561 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.561 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:29.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:29.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:29.562 Found net devices under 0000:86:00.0: cvl_0_0 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:29.562 Found net devices under 0000:86:00.1: cvl_0_1 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
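[Annotation] The trace above is nvmf/common.sh matching NICs by PCI vendor:device ID (0x8086:0x159b, the Intel E810 "ice" parts) and resolving each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that matching logic, assuming the standard sysfs layout; the function name is illustrative, not a helper the harness defines:

    find_e810_netdevs() {
      local pci
      for pci in /sys/bus/pci/devices/*; do
        # vendor/device are hex IDs: 0x8086 = Intel, 0x159b = E810 (ice driver),
        # the same values matched in the trace above
        if [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]]; then
          ls "$pci/net" 2>/dev/null   # kernel net device(s) bound to this function
        fi
      done
    }

On this node the two matching functions are 0000:86:00.0 and 0000:86:00.1, whose net devices are cvl_0_0 and cvl_0_1 as echoed in the trace.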
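[Annotation] With hardware confirmed (is_hw=yes), the nvmf_tcp_init sequence traced next wires the two E810 ports into a back-to-back topology: one port moves into a network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the trace that follows (same commands, xtrace prefixes dropped; requires root):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                            # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # first E810 port -> target ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The two pings below confirm both directions before the target application is started.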
00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:23:29.562 00:23:29.562 --- 10.0.0.2 ping statistics --- 00:23:29.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.562 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:29.562 00:23:29.562 --- 10.0.0.1 ping statistics --- 00:23:29.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.562 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3344733 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3344733 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3344733 ']' 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.562 07:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.562 [2024-07-15 07:59:13.401470] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:23:29.562 [2024-07-15 07:59:13.401516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.562 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.562 [2024-07-15 07:59:13.473233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.562 [2024-07-15 07:59:13.553978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:29.563 [2024-07-15 07:59:13.554012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.563 [2024-07-15 07:59:13.554020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.563 [2024-07-15 07:59:13.554026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.563 [2024-07-15 07:59:13.554031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.563 [2024-07-15 07:59:13.554082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.563 [2024-07-15 07:59:13.554189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.563 [2024-07-15 07:59:13.554214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.563 [2024-07-15 07:59:13.554215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.563 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.563 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:29.563 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.820 [2024-07-15 07:59:14.387706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.820 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:29.820 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.820 07:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.820 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:30.078 Malloc1 00:23:30.078 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.078 07:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.335 07:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.592 [2024-07-15 07:59:15.153988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.592 07:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.879 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.880 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.880 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.880 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.880 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:30.880 07:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:31.139 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:31.139 fio-3.35 00:23:31.139 Starting 1 thread 00:23:31.139 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.661 00:23:33.661 test: (groupid=0, jobs=1): err= 0: pid=3345242: Mon Jul 15 07:59:17 2024 00:23:33.661 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec) 00:23:33.661 slat (nsec): min=1604, max=244548, avg=1745.61, stdev=2218.35 00:23:33.661 clat (usec): min=3219, max=10501, avg=6001.17, stdev=453.38 00:23:33.661 lat (usec): min=3253, max=10502, avg=6002.91, stdev=453.28 00:23:33.661 clat percentiles (usec): 00:23:33.661 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:33.661 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:23:33.661 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6521], 95.00th=[ 6718], 00:23:33.661 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8848], 00:23:33.661 | 99.99th=[10159] 00:23:33.661 bw ( KiB/s): 
min=46120, max=47776, per=99.99%, avg=47104.00, stdev=743.47, samples=4 00:23:33.661 iops : min=11530, max=11944, avg=11776.00, stdev=185.87, samples=4 00:23:33.661 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.7MiB/2005msec); 0 zone resets 00:23:33.661 slat (nsec): min=1655, max=240901, avg=1821.52, stdev=1729.38 00:23:33.661 clat (usec): min=2482, max=9445, avg=4849.77, stdev=381.06 00:23:33.661 lat (usec): min=2498, max=9447, avg=4851.59, stdev=381.01 00:23:33.661 clat percentiles (usec): 00:23:33.661 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:23:33.661 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:23:33.661 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:33.661 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 8029], 99.95th=[ 9110], 00:23:33.661 | 99.99th=[ 9372] 00:23:33.661 bw ( KiB/s): min=46464, max=47296, per=99.97%, avg=46836.00, stdev=374.86, samples=4 00:23:33.661 iops : min=11616, max=11824, avg=11709.00, stdev=93.72, samples=4 00:23:33.661 lat (msec) : 4=0.52%, 10=99.47%, 20=0.01% 00:23:33.661 cpu : usr=74.10%, sys=24.35%, ctx=37, majf=0, minf=6 00:23:33.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:33.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.661 issued rwts: total=23613,23484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.661 00:23:33.661 Run status group 0 (all jobs): 00:23:33.661 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2005-2005msec 00:23:33.661 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.7MiB (96.2MB), run=2005-2005msec 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:33.661 07:59:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:33.661 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:33.661 fio-3.35 00:23:33.661 Starting 1 thread 00:23:33.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.560 [2024-07-15 07:59:20.153391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2270 is same with the state(5) to be set 00:23:35.560 [2024-07-15 07:59:20.153448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2270 is same with the state(5) to be set 00:23:35.560 [2024-07-15 07:59:20.153456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2270 is same with the state(5) to be set 00:23:36.124 00:23:36.124 test: (groupid=0, jobs=1): err= 0: pid=3345726: Mon Jul 15 07:59:20 2024 00:23:36.124 read: IOPS=10.5k, BW=165MiB/s (173MB/s)(337MiB/2049msec) 00:23:36.124 slat (nsec): min=2623, max=86522, avg=2856.92, stdev=1302.39 00:23:36.124 clat (usec): min=2029, max=52605, avg=7016.51, stdev=3133.33 00:23:36.124 lat (usec): min=2031, max=52608, avg=7019.36, stdev=3133.41 00:23:36.124 clat percentiles (usec): 00:23:36.124 | 1.00th=[ 3589], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5407], 00:23:36.124 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7308], 00:23:36.124 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[ 9765], 00:23:36.124 | 99.00th=[11994], 99.50th=[13042], 99.90th=[51643], 99.95th=[52167], 00:23:36.124 | 99.99th=[52691] 00:23:36.124 bw ( KiB/s): min=82112, max=97280, per=52.43%, avg=88408.00, stdev=6395.49, samples=4 00:23:36.124 iops : min= 5132, max= 6080, avg=5525.50, stdev=399.72, samples=4 00:23:36.124 write: IOPS=6287, BW=98.2MiB/s (103MB/s)(180MiB/1837msec); 0 zone resets 00:23:36.124 slat (usec): min=30, max=380, avg=32.09, stdev= 7.68 00:23:36.124 clat (usec): min=4860, max=55581, avg=8823.49, stdev=3128.48 00:23:36.124 lat (usec): min=4892, max=55612, avg=8855.58, stdev=3129.36 00:23:36.124 clat percentiles (usec): 00:23:36.124 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7439], 00:23:36.124 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 
00:23:36.124 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:23:36.124 | 99.00th=[13435], 99.50th=[14877], 99.90th=[53740], 99.95th=[54789], 00:23:36.124 | 99.99th=[55313] 00:23:36.124 bw ( KiB/s): min=84992, max=101376, per=91.49%, avg=92048.00, stdev=6816.95, samples=4 00:23:36.124 iops : min= 5312, max= 6336, avg=5753.00, stdev=426.06, samples=4 00:23:36.124 lat (msec) : 4=1.46%, 10=89.42%, 20=8.73%, 50=0.10%, 100=0.29% 00:23:36.124 cpu : usr=86.82%, sys=12.40%, ctx=21, majf=0, minf=3 00:23:36.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:36.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:36.124 issued rwts: total=21592,11551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:36.125 00:23:36.125 Run status group 0 (all jobs): 00:23:36.125 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=337MiB (354MB), run=2049-2049msec 00:23:36.125 WRITE: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=180MiB (189MB), run=1837-1837msec 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.125 rmmod nvme_tcp 00:23:36.125 rmmod nvme_fabrics 00:23:36.125 rmmod nvme_keyring 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3344733 ']' 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3344733 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3344733 ']' 00:23:36.125 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3344733 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3344733 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.383 
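[Annotation] For reference, the subsystem both fio runs above exercised was provisioned through the RPC sequence traced earlier, condensed here with the workspace prefix shortened to $SPDK (a shorthand introduced for this note, not a variable the harness sets):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB RAM-backed bdev
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # fio then drives I/O through the SPDK NVMe fio plugin against that listener:
    LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
      $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096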
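[Annotation] A quick consistency check on the two fio summaries: reported bandwidth should equal IOPS times block size, and it does for both jobs:

    # 4 KiB job: avg 11776 read IOPS -> 46 MiB/s (reported 46.0MiB/s)
    echo $(( 11776 * 4096 / 1024 / 1024 ))
    # 16 KiB job: 21592 reads in 2.049 s -> ~164 MiB/s (reported 165MiB/s)
    echo $(( 21592 * 16384 / 2049 * 1000 / 1024 / 1024 ))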
07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3344733' 00:23:36.383 killing process with pid 3344733 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3344733 00:23:36.383 07:59:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3344733 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.383 07:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.925 07:59:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.925 00:23:38.925 real 0m15.723s 00:23:38.925 user 0m47.394s 00:23:38.925 sys 0m6.261s 00:23:38.925 07:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.925 07:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 ************************************ 00:23:38.925 END TEST nvmf_fio_host 00:23:38.925 ************************************ 00:23:38.925 07:59:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:38.925 07:59:23 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:38.925 07:59:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.925 07:59:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.925 07:59:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.925 ************************************ 00:23:38.925 START TEST nvmf_failover 00:23:38.925 ************************************ 00:23:38.925 07:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:38.925 * Looking for test storage... 
00:23:38.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.925 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 07:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 07:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 07:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 07:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=[toolchain PATH dump elided, as in the fio_host trace above] 07:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=[elided] 07:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=[elided] 07:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 07:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo [PATH value elided] 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 07:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.926 07:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:44.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:44.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:44.199 Found net devices under 0000:86:00.0: cvl_0_0 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:44.199 Found net devices under 0000:86:00.1: cvl_0_1 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.199 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.459 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.459 07:59:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:44.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:44.459 00:23:44.459 --- 10.0.0.2 ping statistics --- 00:23:44.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.459 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:44.459 00:23:44.459 --- 10.0.0.1 ping statistics --- 00:23:44.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.459 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3349649 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3349649 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3349649 ']' 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.459 07:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.459 [2024-07-15 07:59:29.207237] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
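
For reference, the namespace topology that nvmf_tcp_init assembled just above reduces to the following shell sequence (a condensed sketch of the commands logged above; it assumes the two e810 ports, already renamed cvl_0_0/cvl_0_1 by the harness, are cabled back-to-back):

    ip netns add cvl_0_0_ns_spdk                       # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

The target (nvmf_tgt) is then launched inside that namespace via ip netns exec, as the log shows next, so listeners can later be torn down and re-created without disturbing the initiator side.
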
00:23:44.460 [2024-07-15 07:59:29.207279] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.719 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.719 [2024-07-15 07:59:29.278524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:44.719 [2024-07-15 07:59:29.357498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.719 [2024-07-15 07:59:29.357532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.719 [2024-07-15 07:59:29.357540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.719 [2024-07-15 07:59:29.357550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.719 [2024-07-15 07:59:29.357555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.719 [2024-07-15 07:59:29.357619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.719 [2024-07-15 07:59:29.357724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.719 [2024-07-15 07:59:29.357725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.288 07:59:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.288 07:59:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:45.288 07:59:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.288 07:59:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:45.288 07:59:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.548 07:59:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.548 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:45.548 [2024-07-15 07:59:30.226918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.548 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:45.807 Malloc0 00:23:45.807 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.066 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.325 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.325 [2024-07-15 07:59:30.966282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.325 07:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.584 [2024-07-15 
07:59:31.138756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.584 07:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:46.584 [2024-07-15 07:59:31.319351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3349940 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3349940 /var/tmp/bdevperf.sock 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3349940 ']' 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.844 07:59:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:47.781 07:59:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.781 07:59:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:47.781 07:59:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.040 NVMe0n1 00:23:48.040 07:59:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:48.299 00:23:48.299 07:59:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3350283 00:23:48.299 07:59:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.299 07:59:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:49.266 07:59:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.524 07:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:52.813 07:59:37 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.813 00:23:52.813 07:59:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:53.072 07:59:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:56.361 07:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.361 [2024-07-15 07:59:40.818993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.361 07:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:57.298 07:59:41 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:57.298 [2024-07-15 07:59:42.018143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7aa0 is same with the state(5) to be set 00:23:57.298 [the same tcp.c:1607 *ERROR* record repeats 48 more times for tqpair=0x10b7aa0 between 07:59:42.018191 and 07:59:42.018499] 00:23:57.299 07:59:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3350283 00:24:03.871 0 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3349940 ']' 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349940' 00:24:03.871 killing process with pid 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3349940 00:24:03.871 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:03.871 [2024-07-15 07:59:31.385308] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
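
Before the try.txt replay continues, note that the failover exercise it records reduces to the RPC sequence below (a condensed sketch assembled from the host/failover.sh steps logged above; the loop is shorthand for the three individual add_listener calls):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # provision one malloc-backed namespace behind three TCP listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf attaches NVMe0 over 4420 and again over 4421, giving it a second path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # while perform_tests drives verify I/O, listeners are removed and re-added in turn
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The tcp.c:1607 qpair-state churn above and the ABORTED - SQ DELETION completions replayed below are the visible side effect of removing a listener while verify I/O is in flight; the initiator retries on the surviving path, which is why the run still finishes cleanly.
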
00:24:03.871 [2024-07-15 07:59:31.385359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349940 ] 00:24:03.871 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.871 [2024-07-15 07:59:31.454087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.871 [2024-07-15 07:59:31.529416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.871 Running I/O for 15 seconds... 00:24:03.871 [2024-07-15 07:59:34.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.871 [2024-07-15 07:59:34.111870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.871 [2024-07-15 07:59:34.111993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.871 [2024-07-15 07:59:34.111999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the same nvme_qpair.c: 243 print_command / 474 print_completion record pair repeats for every outstanding I/O — READs from lba 94016 through lba 94800 plus a WRITE at lba 94912 — each completed ABORTED - SQ DELETION (00/08)] 00:24:03.874 [2024-07-15 07:59:34.113542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 07:59:34.113682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.874 [2024-07-15 
07:59:34.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.874 [2024-07-15 07:59:34.113801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e73300 is same with the state(5) to be set 00:24:03.874 [2024-07-15 07:59:34.113819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.874 [2024-07-15 07:59:34.113824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.874 [2024-07-15 07:59:34.113830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94896 len:8 PRP1 0x0 PRP2 0x0 00:24:03.874 [2024-07-15 07:59:34.113836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.874 [2024-07-15 07:59:34.113877] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e73300 was disconnected and freed. reset controller. 
00:24:03.874 [2024-07-15 07:59:34.113888] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:03.874 [2024-07-15 07:59:34.113909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.874 [2024-07-15 07:59:34.113917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.874 [2024-07-15 07:59:34.113924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.874 [2024-07-15 07:59:34.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.874 [2024-07-15 07:59:34.113938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.874 [2024-07-15 07:59:34.113944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.874 [2024-07-15 07:59:34.113951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.874 [2024-07-15 07:59:34.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.874 [2024-07-15 07:59:34.113964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:03.874 [2024-07-15 07:59:34.116827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:03.874 [2024-07-15 07:59:34.116856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55540 (9): Bad file descriptor
00:24:03.874 [2024-07-15 07:59:34.307895] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:03.874 [2024-07-15 07:59:37.626674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:03.874 [2024-07-15 07:59:37.626718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of repeated command/completion *NOTICE* pairs condensed: WRITE (lba 68400-68520) and READ (lba 67504-68376) commands on sqid:1, each ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:03.877 [2024-07-15 07:59:37.628643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020380 is same with the state(5) to be set
00:24:03.877 [2024-07-15 07:59:37.628653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:03.877 [2024-07-15 07:59:37.628659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:03.877 [2024-07-15 07:59:37.628666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68384 len:8 PRP1 0x0 PRP2 0x0
00:24:03.877 [2024-07-15 07:59:37.628672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.877 [2024-07-15 07:59:37.628714] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2020380 was disconnected and freed. reset controller.
00:24:03.877 [2024-07-15 07:59:37.628723] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:03.877 [2024-07-15 07:59:37.628747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.877 [2024-07-15 07:59:37.628754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.877 [2024-07-15 07:59:37.628762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.877 [2024-07-15 07:59:37.628769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.877 [2024-07-15 07:59:37.628777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.877 [2024-07-15 07:59:37.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.877 [2024-07-15 07:59:37.628790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.878 [2024-07-15 07:59:37.628797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.878 [2024-07-15 07:59:37.628803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:03.878 [2024-07-15 07:59:37.631660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:03.878 [2024-07-15 07:59:37.631689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55540 (9): Bad file descriptor
00:24:03.878 [2024-07-15 07:59:37.704734] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
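The failover just logged (10.0.0.2:4421 to 10.0.0.2:4422) is only possible because the target exposes the subsystem on several TCP listeners; the actual setup commands appear in the script trace near the end of this section. A minimal sketch of that listener setup, reusing the rpc.py calls visible further down (the $rpc shorthand is a placeholder added here, not part of the test script):

    # shorthand for the rpc.py path used throughout this log (placeholder variable)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # expose the subsystem on the alternate ports that serve as failover targets
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422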
00:24:03.878 [2024-07-15 07:59:42.019023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.878 [2024-07-15 07:59:42.019664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.878 [2024-07-15 07:59:42.019672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81592 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.879 [2024-07-15 07:59:42.019679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.879 [2024-07-15 07:59:42.019694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.879 [2024-07-15 07:59:42.019711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 
[2024-07-15 07:59:42.019828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.019992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.019998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.879 [2024-07-15 07:59:42.020211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.879 [2024-07-15 07:59:42.020218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:03.880 [2024-07-15 07:59:42.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:03.880 [2024-07-15 07:59:42.020426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020453] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82016 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82024 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82032 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82040 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82048 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82056 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82064 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82072 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82080 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82088 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82096 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82104 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 
[2024-07-15 07:59:42.020749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82112 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82120 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82128 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82136 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.880 [2024-07-15 07:59:42.020835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.880 [2024-07-15 07:59:42.020841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82144 len:8 PRP1 0x0 PRP2 0x0 00:24:03.880 [2024-07-15 07:59:42.020849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.880 [2024-07-15 07:59:42.020855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82152 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82160 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82168 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82176 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82184 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.020978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82192 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.020984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.020991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.020996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.021001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82200 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.021008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.021016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.021020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:82208 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82216 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82224 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82232 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82240 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82248 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82256 len:8 PRP1 0x0 PRP2 0x0 
00:24:03.881 [2024-07-15 07:59:42.031858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82264 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82272 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82280 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.031969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.031976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.031984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82288 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.032001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.032009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.032017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.032025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.881 [2024-07-15 07:59:42.032035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:03.881 [2024-07-15 07:59:42.032042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:03.881 [2024-07-15 07:59:42.032050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:24:03.881 [2024-07-15 07:59:42.032058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.881 [2024-07-15 07:59:42.032105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2020170 was disconnected and freed. reset controller.
00:24:03.881 [2024-07-15 07:59:42.032117] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:03.881 [2024-07-15 07:59:42.032143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.881 [2024-07-15 07:59:42.032154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.881 [2024-07-15 07:59:42.032164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.881 [2024-07-15 07:59:42.032174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.881 [2024-07-15 07:59:42.032184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.881 [2024-07-15 07:59:42.032192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.881 [2024-07-15 07:59:42.032202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.881 [2024-07-15 07:59:42.032211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.881 [2024-07-15 07:59:42.032222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:03.881 [2024-07-15 07:59:42.032263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55540 (9): Bad file descriptor
00:24:03.881 [2024-07-15 07:59:42.036123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:03.881 [2024-07-15 07:59:42.107269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
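On the initiator side, bdev_nvme can only fail over between transport IDs it already knows about, which is why the test attaches the same controller name over every path before any failure is injected. A minimal sketch of that multipath attach, mirroring the bdev_nvme_attach_controller calls traced below (socket path and NQN taken from this log; $rpc is the placeholder from the earlier sketch):

    # register all three paths under one controller name so failover_trid has targets
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done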
00:24:03.881
00:24:03.881 Latency(us)
00:24:03.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.881 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:03.881 Verification LBA range: start 0x0 length 0x4000
00:24:03.881 NVMe0n1 : 15.00 10796.58 42.17 1013.46 0.00 10816.13 450.56 21655.37
00:24:03.881 ===================================================================================================================
00:24:03.881 Total : 10796.58 42.17 1013.46 0.00 10816.13 450.56 21655.37
00:24:03.882 Received shutdown signal, test time was about 15.000000 seconds
00:24:03.882
00:24:03.882 Latency(us)
00:24:03.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.882 ===================================================================================================================
00:24:03.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3352702
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3352702 /var/tmp/bdevperf.sock
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3352702 ']'
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:03.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.882 07:59:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:04.451 07:59:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.451 07:59:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:04.451 07:59:49 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:04.709 [2024-07-15 07:59:49.346515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:04.709 07:59:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:04.968 [2024-07-15 07:59:49.523026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:04.968 07:59:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.227 NVMe0n1 00:24:05.227 07:59:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.487 00:24:05.487 07:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.055 00:24:06.055 07:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:06.055 07:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:06.055 07:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.314 07:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:09.606 07:59:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:09.606 07:59:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:09.606 07:59:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3353702 00:24:09.606 07:59:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:09.606 07:59:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3353702 00:24:10.543 0 00:24:10.543 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:10.543 [2024-07-15 07:59:48.385561] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:24:10.543 [2024-07-15 07:59:48.385615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352702 ] 00:24:10.543 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.543 [2024-07-15 07:59:48.454741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.543 [2024-07-15 07:59:48.524214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.543 [2024-07-15 07:59:50.886709] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:10.543 [2024-07-15 07:59:50.886758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.543 [2024-07-15 07:59:50.886769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.543 [2024-07-15 07:59:50.886780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.543 [2024-07-15 07:59:50.886787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.543 [2024-07-15 07:59:50.886795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.543 [2024-07-15 07:59:50.886802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.543 [2024-07-15 07:59:50.886809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.543 [2024-07-15 07:59:50.886816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.543 [2024-07-15 07:59:50.886822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.543 [2024-07-15 07:59:50.886848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.543 [2024-07-15 07:59:50.886862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acc540 (9): Bad file descriptor 00:24:10.543 [2024-07-15 07:59:50.939412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:10.543 Running I/O for 1 seconds... 
00:24:10.543
00:24:10.543 Latency(us)
00:24:10.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:10.543 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:10.543 Verification LBA range: start 0x0 length 0x4000
00:24:10.543 NVMe0n1 : 1.01 11111.70 43.41 0.00 0.00 11468.36 2421.98 9744.92
00:24:10.543 ===================================================================================================================
00:24:10.543 Total : 11111.70 43.41 0.00 0.00 11468.36 2421.98 9744.92
00:24:10.543 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:10.543 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:10.801 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:11.062 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:11.062 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:11.062 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:11.353 07:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
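(Editor's sketch: the 1-second verification run above completed at ~11.1k IOPS with zero failed I/O after the forced failover, versus ~10.8k IOPS with ~1013 Fail/s during the 15-second run that absorbed the resets. The test drives bdevperf in its RPC mode; a condensed sketch of that pattern with the flags from this job, paths relative to the spdk checkout:

    # Start bdevperf idle (-z) on an RPC socket, configure it, then trigger the run:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
)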
07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:14.903 07:59:59 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:14.903 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.903 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:14.903 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.904 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:14.904 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.904 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.904 rmmod nvme_tcp 00:24:14.904 rmmod nvme_fabrics 00:24:14.904 rmmod nvme_keyring 00:24:14.904 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3349649 ']' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3349649 ']' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349649' 00:24:15.163 killing process with pid 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3349649 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.163 07:59:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.699 08:00:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.699 00:24:17.699 real 0m38.706s 00:24:17.699 user 2m3.999s 00:24:17.699 sys 0m7.636s 00:24:17.699 08:00:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.699 08:00:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
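(Editor's sketch: teardown above is the standard nvmftestfini path: remove the try.txt log, delete the subsystem, kill the target (pid 3349649 in this run), and unload the kernel modules; the bare rmmod lines are modprobe -v -r narrating the nvme_tcp/nvme_fabrics/nvme_keyring chain. Roughly:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3349649               # killprocess: the nvmf_tgt started for this test
    modprobe -v -r nvme-tcp    # emits the rmmod lines captured above
    modprobe -v -r nvme-fabrics
)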
00:24:17.699 ************************************ 00:24:17.699 END TEST nvmf_failover 00:24:17.699 ************************************ 00:24:17.699 08:00:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:17.699 08:00:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:17.699 08:00:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:17.699 08:00:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.699 08:00:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.699 ************************************ 00:24:17.699 START TEST nvmf_host_discovery 00:24:17.699 ************************************ 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:17.699 * Looking for test storage... 00:24:17.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:17.699 08:00:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.699 08:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.965 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.966 08:00:07 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:22.966 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:22.966 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.966 08:00:07 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:22.966 Found net devices under 0000:86:00.0: cvl_0_0 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:22.966 Found net devices under 0000:86:00.1: cvl_0_1 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.966 08:00:07 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.966 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:24:23.225 00:24:23.225 --- 10.0.0.2 ping statistics --- 00:24:23.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.225 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:23.225 00:24:23.225 --- 10.0.0.1 ping statistics --- 00:24:23.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.225 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3358181 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3358181 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3358181 ']' 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.225 08:00:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.225 [2024-07-15 08:00:07.952779] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:24:23.225 [2024-07-15 08:00:07.952829] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.483 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.483 [2024-07-15 08:00:08.026978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.483 [2024-07-15 08:00:08.105521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.483 [2024-07-15 08:00:08.105555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.483 [2024-07-15 08:00:08.105562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.483 [2024-07-15 08:00:08.105569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.483 [2024-07-15 08:00:08.105574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
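(Editor's sketch: at this point the target for the discovery test is coming up inside the cvl_0_0_ns_spdk namespace created above, so 10.0.0.2 lives on the cvl_0_0 interface that was moved into it. Reduced to its essentials, and using only commands that appear in this log, the launch looks like:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
)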
00:24:23.483 [2024-07-15 08:00:08.105597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.049 [2024-07-15 08:00:08.796823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.049 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 [2024-07-15 08:00:08.804939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 null0 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 null1 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3358594 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3358594 /tmp/host.sock 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3358594 ']' 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:24.307 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.307 08:00:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 [2024-07-15 08:00:08.881142] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:24:24.307 [2024-07-15 08:00:08.881184] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358594 ] 00:24:24.307 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.307 [2024-07-15 08:00:08.948044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.307 [2024-07-15 08:00:09.027577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.255 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.256 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.257 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
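(Editor's sketch: the host side is a second SPDK app bound to /tmp/host.sock that acts purely as an initiator: bdev_nvme_start_discovery points it at the discovery service on port 8009, and the empty get_subsystem_names/get_bdev_list results above confirm nothing attaches until the target actually exposes the subsystem to this host (listener on 4420 plus nvmf_subsystem_add_host, a few steps below). The wiring, with the values from this log:

    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # Controllers/bdevs appear once the target exposes a subsystem to this host:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
)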
00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:25.258 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.259 08:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.521 [2024-07-15 08:00:10.036199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:25.521 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:25.522 08:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:26.088 [2024-07-15 08:00:10.744766] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:26.088 [2024-07-15 08:00:10.744786] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:26.088 [2024-07-15 08:00:10.744798] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.346 [2024-07-15 08:00:10.871198] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:26.346 [2024-07-15 08:00:11.088507] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:26.346 [2024-07-15 08:00:11.088525] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.603 08:00:11 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:26.603 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.863 [2024-07-15 08:00:11.564558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:26.863 [2024-07-15 08:00:11.565482] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:26.863 [2024-07-15 08:00:11.565505] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.863 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.864 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.122 [2024-07-15 08:00:11.691878] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:27.122 08:00:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:27.381 [2024-07-15 08:00:11.996079] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:27.381 [2024-07-15 08:00:11.996096] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:27.381 [2024-07-15 08:00:11.996101] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.316 [2024-07-15 08:00:12.816659] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:28.316 [2024-07-15 08:00:12.816681] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.316 [2024-07-15 08:00:12.820546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.316 [2024-07-15 08:00:12.820564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.316 [2024-07-15 08:00:12.820573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.316 [2024-07-15 08:00:12.820580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.316 [2024-07-15 08:00:12.820587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.316 [2024-07-15 08:00:12.820594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.316 [2024-07-15 08:00:12.820601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.316 [2024-07-15 08:00:12.820608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.316 [2024-07-15 08:00:12.820619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.316 08:00:12 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.316 [2024-07-15 08:00:12.830559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.316 [2024-07-15 08:00:12.840597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.316 [2024-07-15 08:00:12.840892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-15 08:00:12.840908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.316 [2024-07-15 08:00:12.840916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.316 [2024-07-15 08:00:12.840929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.316 [2024-07-15 08:00:12.840945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.316 [2024-07-15 08:00:12.840953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.316 [2024-07-15 08:00:12.840961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.316 [2024-07-15 08:00:12.840971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
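Every assertion in this trace funnels through the waitforcondition helper from common/autotest_common.sh (the @912-@918 steps that keep reappearing above). The helper's source is not reproduced in this log, so the following is a minimal sketch of the pattern as reconstructed from the xtrace, not the verbatim function:

  waitforcondition() {
      local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]' (@912)
      local max=10     # retry budget seen at @913
      while (( max-- )); do                  # @914
          eval "$cond" && return 0           # re-expands the $(...) queries each pass (@915-@916)
          sleep 1                            # @918
      done
      return 1  # assumed failure path; this run never exhausts the budget
  }

A call such as waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' therefore re-polls bdev_get_bdevs once per second for up to ten attempts, which is why the attach completing at 08:00:11 above is picked up on the iteration right after the sleep 1.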
00:24:28.316 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.316 [2024-07-15 08:00:12.850652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.316 [2024-07-15 08:00:12.850887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-15 08:00:12.850900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.316 [2024-07-15 08:00:12.850909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.316 [2024-07-15 08:00:12.850920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.316 [2024-07-15 08:00:12.850931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.316 [2024-07-15 08:00:12.850938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.316 [2024-07-15 08:00:12.850944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.316 [2024-07-15 08:00:12.850954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.316 [2024-07-15 08:00:12.860704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.316 [2024-07-15 08:00:12.860842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-15 08:00:12.860855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.316 [2024-07-15 08:00:12.860862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.316 [2024-07-15 08:00:12.860872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.316 [2024-07-15 08:00:12.860881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.316 [2024-07-15 08:00:12.860887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.316 [2024-07-15 08:00:12.860894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.316 [2024-07-15 08:00:12.860903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
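The burst of connect() failed, errno = 111 records here is the host driver retrying the controller on 10.0.0.2:4420 after the nvmf_subsystem_remove_listener call at host/discovery.sh@127: the target has closed that socket, so every reconnect gets ECONNREFUSED until the AER-driven discovery log page refresh prunes the dead path. The same state can be inspected by hand; a sketch assuming SPDK's scripts/rpc.py (the tool the rpc_cmd wrapper drives) is on PATH:

  # target side: drop the first listener, as at host/discovery.sh@127 above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # host side: poll the initiator app until discovery prunes the stale path
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # prints "4420 4421" while the dead path lingers, then just "4421"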
00:24:28.316 [2024-07-15 08:00:12.870761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.316 [2024-07-15 08:00:12.870947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-15 08:00:12.870960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.317 [2024-07-15 08:00:12.870968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.317 [2024-07-15 08:00:12.870979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.317 [2024-07-15 08:00:12.870988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.317 [2024-07-15 08:00:12.870995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.317 [2024-07-15 08:00:12.871002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.317 [2024-07-15 08:00:12.871011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 [2024-07-15 08:00:12.880812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.317 [2024-07-15 08:00:12.880926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-15 08:00:12.880943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.317 [2024-07-15 08:00:12.880950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.317 [2024-07-15 08:00:12.880960] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.317 [2024-07-15 08:00:12.880970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.317 [2024-07-15 08:00:12.880977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.317 [2024-07-15 08:00:12.880984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.317 [2024-07-15 08:00:12.880993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.317 [2024-07-15 08:00:12.890863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.317 [2024-07-15 08:00:12.891041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-15 08:00:12.891054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.317 [2024-07-15 08:00:12.891061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.317 [2024-07-15 08:00:12.891072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.317 [2024-07-15 08:00:12.891082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.317 [2024-07-15 08:00:12.891089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.317 [2024-07-15 08:00:12.891096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.317 [2024-07-15 08:00:12.891106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:28.317 [2024-07-15 08:00:12.900915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.317 [2024-07-15 08:00:12.901103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-15 08:00:12.901115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2456f10 with addr=10.0.0.2, port=4420 00:24:28.317 [2024-07-15 08:00:12.901122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2456f10 is same with the state(5) to be set 00:24:28.317 [2024-07-15 08:00:12.901132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2456f10 (9): Bad file descriptor 00:24:28.317 [2024-07-15 08:00:12.901142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:28.317 [2024-07-15 08:00:12.901149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:28.317 [2024-07-15 08:00:12.901155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:28.317 [2024-07-15 08:00:12.901165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
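The is_notification_count_eq gates (host/discovery.sh@79-@80) used before and after this storm all reduce to one query: fetch every event newer than the last consumed id and count it with jq. A hedged reconstruction of the @74/@75 steps — the cursor arithmetic is inferred from notify_id moving 0 -> 1 -> 2 -> 4 across this run, not copied from the script:

  get_notification_count() {
      # count only events newer than the cursor held in $notify_id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      # advance the cursor so the next assertion sees only fresh events
      notify_id=$((notify_id + notification_count))
  }

This also explains why the listener removal is expected to yield is_notification_count_eq 0 just below: listener changes surface as AERs on the discovery controller, while the notify bus being counted here carries the bdev-level events.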
00:24:28.317 [2024-07-15 08:00:12.902542] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:28.317 [2024-07-15 08:00:12.902557] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 08:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.317 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:28.585 
08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.585 08:00:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.519 [2024-07-15 08:00:14.233303] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:29.519 [2024-07-15 08:00:14.233320] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:29.519 [2024-07-15 08:00:14.233332] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.777 [2024-07-15 08:00:14.319590] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:29.777 [2024-07-15 08:00:14.420019] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:29.777 [2024-07-15 08:00:14.420044] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.777 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.778 request: 00:24:29.778 { 00:24:29.778 "name": "nvme", 00:24:29.778 "trtype": "tcp", 00:24:29.778 "traddr": "10.0.0.2", 00:24:29.778 "adrfam": "ipv4", 00:24:29.778 "trsvcid": "8009", 00:24:29.778 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:29.778 "wait_for_attach": true, 00:24:29.778 "method": "bdev_nvme_start_discovery", 00:24:29.778 "req_id": 1 00:24:29.778 } 00:24:29.778 Got JSON-RPC error response 00:24:29.778 response: 00:24:29.778 { 00:24:29.778 "code": -17, 00:24:29.778 "message": "File exists" 00:24:29.778 } 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:29.778 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.036 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.036 request: 00:24:30.036 { 00:24:30.036 "name": "nvme_second", 00:24:30.036 "trtype": "tcp", 00:24:30.036 "traddr": "10.0.0.2", 00:24:30.036 "adrfam": "ipv4", 00:24:30.036 "trsvcid": "8009", 00:24:30.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:30.036 "wait_for_attach": true, 00:24:30.036 "method": "bdev_nvme_start_discovery", 00:24:30.036 "req_id": 1 00:24:30.036 } 00:24:30.036 Got JSON-RPC error response 00:24:30.036 response: 00:24:30.036 { 00:24:30.037 "code": -17, 00:24:30.037 "message": "File exists" 00:24:30.037 } 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.037 08:00:14 
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:30.037 08:00:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:30.969 [2024-07-15 08:00:15.668310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:30.969 [2024-07-15 08:00:15.668338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2493a00 with addr=10.0.0.2, port=8010
00:24:30.969 [2024-07-15 08:00:15.668351] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:30.969 [2024-07-15 08:00:15.668357] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:30.969 [2024-07-15 08:00:15.668363] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:31.933 [2024-07-15 08:00:16.670818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:31.933 [2024-07-15 08:00:16.670844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2493a00 with addr=10.0.0.2, port=8010
00:24:31.933 [2024-07-15 08:00:16.670855] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:31.933 [2024-07-15 08:00:16.670861] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:31.933 [2024-07-15 08:00:16.670867] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:33.309 [2024-07-15 08:00:17.672984] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:24:33.309 request:
00:24:33.309 {
00:24:33.309 "name": "nvme_second",
00:24:33.309 "trtype": "tcp",
00:24:33.309 "traddr": "10.0.0.2",
00:24:33.309 "adrfam": "ipv4",
00:24:33.309 "trsvcid": "8010",
00:24:33.309 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:33.309 "wait_for_attach": false,
00:24:33.309 "attach_timeout_ms": 3000,
00:24:33.309 "method": "bdev_nvme_start_discovery",
00:24:33.309 "req_id": 1
00:24:33.309 }
00:24:33.309 Got JSON-RPC error response
00:24:33.309 response:
00:24:33.309 {
00:24:33.309 "code": -110,
00:24:33.309 "message": "Connection timed out"
00:24:33.309 }
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3358594
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:33.309 rmmod nvme_tcp
00:24:33.309 rmmod nvme_fabrics
00:24:33.309 rmmod nvme_keyring
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3358181 ']'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3358181
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3358181 ']'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3358181
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3358181
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3358181'
00:24:33.309 killing process with pid 3358181
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3358181
00:24:33.309 08:00:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3358181
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:33.309 08:00:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:35.842
00:24:35.842 real 0m18.036s
00:24:35.842 user 0m22.336s
00:24:35.842 sys 0m5.707s
00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:35.842 ************************************
00:24:35.842 END TEST nvmf_host_discovery
00:24:35.842 ************************************
00:24:35.842 08:00:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:35.842 08:00:20 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:35.842 08:00:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:35.842 08:00:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:35.842 08:00:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:35.842 ************************************
00:24:35.842 START TEST nvmf_host_multipath_status
00:24:35.842 ************************************
00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:35.842 * Looking for test storage...
00:24:35.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.842 08:00:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.842 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.843 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.843 08:00:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.109 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:41.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:41.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
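The xtrace above walks nvmf/common.sh as it classifies the machine's NICs: both functions of the Intel E810 adapter (vendor 0x8086, device 0x159b) are accepted as usable e810 devices. The step that follows resolves each PCI address to its kernel net device through sysfs; condensed into a few lines, and with the array contents taken from this run's output, the logic is approximately:

    # Map each accepted PCI function to the net device the kernel bound to it.
    pci_devs=(0000:86:00.0 0000:86:00.1)   # the two E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the bound interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host the loop prints cvl_0_0 and cvl_0_1, the two interfaces the rest of the test is built on.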
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:41.110 Found net devices under 0000:86:00.0: cvl_0_0 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:41.110 Found net devices under 0000:86:00.1: cvl_0_1 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.110 08:00:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:41.110 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:41.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:41.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:24:41.369
00:24:41.369 --- 10.0.0.2 ping statistics ---
00:24:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:41.369 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:41.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:41.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms
00:24:41.369
00:24:41.369 --- 10.0.0.1 ping statistics ---
00:24:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:41.369 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:41.369 08:00:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3363891
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3363891
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3363891 ']'
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:41.369 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:41.369 [2024-07-15 08:00:26.067298] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
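The plumbing above is what lets one machine act as both NVMe-oF host and target over real E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace. A reconstruction of the sequence as a plain script, with the interface names and addresses taken directly from the trace:

    # Target-namespace setup, condensed from nvmf/common.sh (nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Because NVMF_APP is prefixed with the namespace command, every later rpc.py call without an explicit -s socket talks to the nvmf_tgt instance running inside cvl_0_0_ns_spdk.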
00:24:41.369 [2024-07-15 08:00:26.067345] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.369 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.627 [2024-07-15 08:00:26.138745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:41.627 [2024-07-15 08:00:26.211389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.627 [2024-07-15 08:00:26.211424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.627 [2024-07-15 08:00:26.211431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.627 [2024-07-15 08:00:26.211437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.627 [2024-07-15 08:00:26.211442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.627 [2024-07-15 08:00:26.211527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.627 [2024-07-15 08:00:26.211527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3363891 00:24:42.192 08:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:42.450 [2024-07-15 08:00:27.068231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.450 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:42.709 Malloc0 00:24:42.709 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:42.709 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.967 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.226 [2024-07-15 08:00:27.777888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.226 08:00:27 
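At this point the target side is essentially complete: a 64 MiB, 512-byte-block Malloc0 bdev is exposed through nqn.2016-06.io.spdk:cnode1 (created with -r, which enables the ANA reporting the whole test depends on), with a TCP listener on 4420 and, just below, a second one on 4421. The host side then starts bdevperf and attaches both listeners as two paths of a single bdev. Condensed from the trace, with rpc_py being the scripts/rpc.py path used throughout this run and the -l/-o reconnect flags copied verbatim:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # First path: creates controller Nvme0 and bdev Nvme0n1 over listener 4420.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    # Second path: same controller name plus -x multipath, so listener 4421
    # becomes an additional path to the existing Nvme0n1 rather than a new bdev.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Both calls print Nvme0n1 in the trace, confirming that the second attach joined the first controller's namespace instead of creating a separate device.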
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:43.226 [2024-07-15 08:00:27.942353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3364155 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3364155 /var/tmp/bdevperf.sock 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3364155 ']' 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.226 08:00:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 08:00:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.162 08:00:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:44.162 08:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:44.420 08:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:44.679 Nvme0n1 00:24:44.679 08:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:45.247 Nvme0n1 00:24:45.247 08:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:45.247 08:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:47.148 08:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:47.148 08:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:47.407 08:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:47.407 08:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.782 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.039 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.039 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.039 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.039 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.297 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.297 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.298 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.298 08:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:49.556 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:49.814 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.073 08:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:51.072 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:51.072 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.072 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.072 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.347 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.347 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:51.347 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.347 08:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.347 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.347 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.347 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.347 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.605 08:00:36 
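Every check_status round above reduces to one jq query per assertion against bdev_nvme_get_io_paths on the bdevperf RPC socket, selecting a path by its listener port and reading one of the current, connected, or accessible flags. A compact rendering of the port_status helper implied by the trace (a reconstruction from host/multipath_status.sh@64, so treat the exact shape as approximate):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # port_status <trsvcid> <field> <expected>: succeeds only when the io_path
    # for that listener reports the expected value for the given field.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }
    # Example, mirroring "port_status 4420 current true" from the trace:
    port_status 4420 current true

So "current" tracks which path I/O is being steered to, "connected" whether the TCP qpair is up, and "accessible" whether the ANA state permits I/O at all.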
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.605 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.605 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.605 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.864 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.123 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.123 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.123 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.123 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:52.123 08:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:52.382 08:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:52.641 08:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:53.576 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:53.576 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.576 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.576 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.835 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.835 08:00:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:53.835 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.835 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.094 08:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.353 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.353 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.353 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.353 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.613 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.613 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.613 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.613 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.871 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.871 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:54.872 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:54.872 08:00:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:55.130 08:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:56.067 08:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:56.067 08:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.067 08:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.067 08:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.325 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.325 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:56.325 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.325 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.584 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.584 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.584 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.584 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.842 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.842 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.843 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.101 08:00:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.101 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:57.101 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.101 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.360 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.360 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:57.360 08:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:57.618 08:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:57.876 08:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.812 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.070 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.328 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.328 08:00:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.328 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.328 08:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:59.586 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.843 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.843 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:59.843 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:00.100 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:00.100 08:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:01.475 08:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:01.475 08:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:01.475 08:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.475 08:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.475 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.475 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:01.475 08:00:46 
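The six booleans handed to check_status decode as current/connected/accessible, first for the 4420 path and then for the 4421 path, in exactly the order the @68-@73 assertions fire. So check_status false true true true false true above verifies that, after set_ANA_state inaccessible optimized, the 4420 path no longer carries I/O and is inaccessible, the 4421 path is the current, accessible one, and both TCP connections stay up. A sketch matching the trace (later in the trace, @116 switches the multipath policy to active_active, after which both paths can be current at once):

check_status() {
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}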
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.475 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.734 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.992 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.992 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:01.992 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.992 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.251 08:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:02.509 08:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:02.509 08:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:02.768 08:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:03.026 08:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:03.960 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:03.960 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:03.960 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.960 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.219 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.219 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:04.219 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.219 08:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.477 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.736 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.736 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.736 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.736 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.994 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.994 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:04.994 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.994 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.253 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.253 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:05.253 08:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:05.511 08:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.511 08:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:06.505 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:06.505 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:06.505 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.505 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:06.763 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.763 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:06.763 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.763 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.021 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.021 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.021 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.021 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.279 08:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:07.538 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.538 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:07.538 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.538 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:07.796 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.796 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:07.796 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.055 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:08.055 08:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:08.993 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:08.993 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:08.993 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.993 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:09.252 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.252 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:09.252 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.252 08:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:09.510 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.510 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:09.510 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.510 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.769 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:10.026 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.026 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:10.026 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:10.026 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.284 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.284 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:10.284 08:00:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:10.542 08:00:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:10.801 08:00:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:11.737 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:11.737 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.737 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.737 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.995 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:12.254 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.254 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:12.254 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.254 08:00:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.513 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.513 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:12.513 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.513 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.514 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]]
00:25:12.514 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:12.514 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:12.514 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3364155
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3364155 ']'
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3364155
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3364155
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3364155'
killing process with pid 3364155
00:25:12.772 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3364155
00:25:12.773 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3364155
00:25:13.057 Connection closed with partial response:
00:25:13.057
00:25:13.057
00:25:13.057 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3364155
00:25:13.057 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:13.057 [2024-07-15 08:00:28.002723] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:25:13.057 [2024-07-15 08:00:28.002776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364155 ]
00:25:13.057 EAL: No free 2048 kB hugepages reported on node 1
00:25:13.057 [2024-07-15 08:00:28.069692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:13.057 [2024-07-15 08:00:28.146774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:13.057 Running I/O for 90 seconds...
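killprocess (traced above at common/autotest_common.sh@948-@972) is the harness teardown: it refuses an empty pid, confirms the process is still alive, resolves its name (reactor_2 here, bdevperf's reactor thread under the -c 0x4 core mask), then kills and reaps it; the "Connection closed with partial response" lines are the initiator side being torn down mid-I/O. A reconstruction of the traced statements (the in-tree function also handles a sudo-wrapped child, which this run does not hit):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                           # @948: refuse an empty pid
    kill -0 "$pid" || return 1                          # @952: is it still running?
    if [ "$(uname)" = Linux ]; then                     # @953
        process_name=$(ps --no-headers -o comm= "$pid") # @954
    fi
    # @958: '[' "$process_name" = sudo ']' would divert to the sudo-wrapped
    # case; this run's process is reactor_2, so it falls through.
    echo "killing process with pid $pid"                # @966
    kill "$pid"                                         # @967
    wait "$pid"                                         # @972
}

Everything from the host/multipath_status.sh@141 cat onward, including the per-I/O NOTICE lines that follow, is the verbatim contents of try.txt, bdevperf's own log: nvme_qpair.c prints each queued READ/WRITE command and its completion, and the ASYMMETRIC ACCESS INACCESSIBLE (03/02) statuses are the expected path-related errors for I/O caught on a listener while its ANA state was inaccessible.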
00:25:13.057 [2024-07-15 08:00:42.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.057 [2024-07-15 08:00:42.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.057 [2024-07-15 08:00:42.152729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.057 [2024-07-15 08:00:42.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:13.058 [2024-07-15 08:00:42.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.153603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.058 [2024-07-15 08:00:42.153609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.058 [2024-07-15 08:00:42.154189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.059 [2024-07-15 08:00:42.154201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.059 [2024-07-15 08:00:42.154222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:13.059 [2024-07-15 08:00:42.154260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.059 [2024-07-15 08:00:42.154386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.059 [2024-07-15 08:00:42.154767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.059 [2024-07-15 08:00:42.154774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:13.059 [2024-07-15 08:00:42.154787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.059 [2024-07-15 08:00:42.154794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[... several hundred near-identical *NOTICE* command/completion pairs elided: nvme_io_qpair_print_command READ/WRITE entries (sqid:1, nsid:1, lba 42496-43512, len:8) each followed by spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, logged between 08:00:42.154 and 08:00:42.171 ...]
00:25:13.065 [2024-07-15 08:00:42.171623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:25:13.065 [2024-07-15 08:00:42.171633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.065 [2024-07-15 08:00:42.171649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.065 [2024-07-15 08:00:42.171658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.065 [2024-07-15 08:00:42.171675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.171960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.172947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.172964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.172983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.172992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:13.066 [2024-07-15 08:00:42.173386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.066 [2024-07-15 08:00:42.173523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.066 [2024-07-15 08:00:42.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.067 [2024-07-15 08:00:42.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.067 [2024-07-15 08:00:42.173576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.067 [2024-07-15 08:00:42.173601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.173754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.173763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.067 [2024-07-15 08:00:42.174178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:13.067 [2024-07-15 08:00:42.174524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.067 [2024-07-15 08:00:42.174542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.067 [2024-07-15 08:00:42.174551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.174988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.174997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:25:13.068 [2024-07-15 08:00:42.175311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.068 [2024-07-15 08:00:42.175427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.068 [2024-07-15 08:00:42.175444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.175662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.175706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.175715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.069 [2024-07-15 08:00:42.176406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:13.069 [2024-07-15 08:00:42.176642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.176822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.182455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.182468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.182486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.182494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:13.069 [2024-07-15 08:00:42.182509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.069 [2024-07-15 08:00:42.182517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.070 [2024-07-15 08:00:42.182724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.070 [2024-07-15 08:00:42.182738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:13.070 [2024-07-15 08:00:42.182746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:13.070 [2024-07-15 08:00:42.182761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:13.070 [2024-07-15 08:00:42.182772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:13.071 [2024-07-15 08:00:42.183433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.071 [2024-07-15 08:00:42.183441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... several hundred similar command/completion pairs elided: interleaved WRITE (lba 43008-43512, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 42496-43000, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1, each len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, timestamps 08:00:42.182788 through 08:00:42.189064 ...]
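Each NOTICE pair above is SPDK's nvme_qpair.c printing a queued I/O command (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion); the (03/02) tuple is the completion's status code type / status code, decoded here as ASYMMETRIC ACCESS INACCESSIBLE, i.e. the target is reporting this path's ANA group as inaccessible, so every queued READ/WRITE on qid:1 completes with that status. A minimal sketch for condensing such a run from the console log, assuming only the record format visible above (the script and its regexes are illustrative, not part of SPDK or this test suite):

#!/usr/bin/env python3
# Sketch: tally nvme_qpair.c NOTICE records from a console log on stdin.
# Assumes only the log format shown above; nothing here is SPDK API.
import re
import sys
from collections import Counter

# "... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43232 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# "... spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 ..."
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[A-Z ]+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

ops, statuses, lbas = Counter(), Counter(), []
for line in sys.stdin:
    # Several records can share one physical line, so scan with finditer.
    for m in CMD_RE.finditer(line):
        ops[m["op"]] += 1
        lbas.append(int(m["lba"]))
    for m in CPL_RE.finditer(line):
        statuses[f"{m['status']} ({m['sct']}/{m['sc']})"] += 1

for op, n in ops.most_common():
    print(f"{op}: {n} commands")
for st, n in statuses.most_common():
    print(f"{st}: {n} completions")
if lbas:
    print(f"lba range: {min(lbas)}-{max(lbas)}")

Piping the build console through it (e.g. python3 tally.py < console.log, a hypothetical invocation) reduces a run like the one above to a handful of per-opcode and per-status counters.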
READ sqid:1 cid:41 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.075 [2024-07-15 08:00:42.188877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.188892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.188901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.188916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.188924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.188939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.188947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.188962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.188970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.188985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.188993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 
sqhd:0028 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.076 [2024-07-15 08:00:42.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.076 [2024-07-15 08:00:42.189614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 
08:00:42.189816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.189912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.189938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.189962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.189977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.189986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43512 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-07-15 08:00:42.190084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190896] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.190977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.190992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.077 [2024-07-15 08:00:42.191001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.077 [2024-07-15 08:00:42.191017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:25:13.078 [2024-07-15 08:00:42.191386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.078 [2024-07-15 08:00:42.191661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.078 [2024-07-15 08:00:42.191677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.191978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.191993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.192002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.192018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.192026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.192044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.192052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.192067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.079 [2024-07-15 08:00:42.192076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.079 [2024-07-15 08:00:42.192092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
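The repeated "(03/02)" pair in these completions is the NVMe status code type / status code: sct 0x3 is Path Related Status and sc 0x2 is Asymmetric Access Inaccessible, i.e. the target reports that the namespace's ANA group cannot service I/O on this path, so the initiator-side qpair drains every outstanding READ/WRITE with that status. Below is a minimal, self-contained C sketch of how the printed fields decode; it is illustrative only, not SPDK source, and the bitfield layout is an assumption (C bitfield ordering is implementation-defined), following the status-field layout of the NVMe completion queue entry.

/* Illustrative decode of the fields printed in the completions above.
 * Not SPDK code; struct layout is an assumption for demonstration. */
#include <stdio.h>

struct status_field {
    unsigned p   : 1;  /* phase tag, printed as p:0 */
    unsigned sc  : 8;  /* status code */
    unsigned sct : 3;  /* status code type */
    unsigned crd : 2;  /* command retry delay */
    unsigned m   : 1;  /* more status info in log page, printed as m:0 */
    unsigned dnr : 1;  /* do not retry, printed as dnr:0 */
};

int main(void) {
    /* "(03/02)" in the log: sct=0x3 (Path Related Status),
     * sc=0x2 (Asymmetric Access Inaccessible). */
    struct status_field st = { .p = 0, .sc = 0x02, .sct = 0x3,
                               .crd = 0, .m = 0, .dnr = 0 };
    const char *kind = (st.sct == 0x3 && st.sc == 0x2)
        ? "ASYMMETRIC ACCESS INACCESSIBLE" : "other";
    printf("sct:%02x sc:%02x m:%u dnr:%u -> %s\n",
           (unsigned)st.sct, (unsigned)st.sc,
           (unsigned)st.m, (unsigned)st.dnr, kind);
    return 0;
}

Note that dnr:0 is clear on every completion here, so the failed commands remain retryable; that is what lets a host multipath layer reissue the I/O on an accessible path once the ANA state changes, which appears to be exactly the transition this nvmf test is exercising.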
[log condensed: 00:25:13.079-00:25:13.081, 2024-07-15 08:00:42.192116-08:00:42.195273 — a second identical sweep over the same LBA ranges with different cids: READs (lba:42496-43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITEs (lba:43008-43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd 0002-004f]
00:25:13.081 [2024-07-15 08:00:42.195289] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.081 [2024-07-15 08:00:42.195297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.081 [2024-07-15 08:00:42.195312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:25:13.082 [2024-07-15 08:00:42.195539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.195977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.195993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196028] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.082 [2024-07-15 08:00:42.196169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.082 [2024-07-15 08:00:42.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 
08:00:42.196281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.196420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.196429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.200271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.200293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.200315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.200336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.200358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.200930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.200954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.200979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.200993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.083 [2024-07-15 08:00:42.201043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201098] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.083 [2024-07-15 08:00:42.201148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.083 [2024-07-15 08:00:42.201162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 
08:00:42.201321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 
cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.084 [2024-07-15 08:00:42.201799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.084 [2024-07-15 08:00:42.201813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 
08:00:42.201946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.201983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.201990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.085 [2024-07-15 08:00:42.202912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.202991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.202999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.203013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.203021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.085 [2024-07-15 08:00:42.203035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.085 [2024-07-15 08:00:42.203043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 
08:00:42.203102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.086 [2024-07-15 08:00:42.203299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.086 [2024-07-15 08:00:42.203307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 
00:25:13.086 [2024-07-15 08:00:42.203320] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion error dump for qid:1; every queued I/O was printed together with its completion. Summary of the repeated entries:
00:25:13.086   WRITE sqid:1 nsid:1 lba:43008..43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:13.087   READ sqid:1 nsid:1 lba:42496..43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.091 [2024-07-15 08:00:42.208978] every completion above reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd stepping 0x0059 through 0x007f and wrapping across two consecutive dumps of the same command set.
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.208993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:13.092 [2024-07-15 08:00:42.209420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.092 [2024-07-15 08:00:42.209494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.092 [2024-07-15 08:00:42.209506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.209906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.209925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.209946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.209968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.209981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.209988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.210480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:25:13.093 [2024-07-15 08:00:42.210494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.210501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.093 [2024-07-15 08:00:42.210520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.210540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.210559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.093 [2024-07-15 08:00:42.210578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.093 [2024-07-15 08:00:42.210590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.210986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.210999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:13.094 [2024-07-15 08:00:42.211064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.094 [2024-07-15 08:00:42.211135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.094 [2024-07-15 08:00:42.211141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43304 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211454] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.211609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.211616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.095 [2024-07-15 08:00:42.212148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 
08:00:42.212162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.095 [2024-07-15 08:00:42.212291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.095 [2024-07-15 08:00:42.212303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.096 [2024-07-15 08:00:42.212310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 
08:00:42.212742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.096 [2024-07-15 08:00:42.212916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.096 [2024-07-15 08:00:42.212929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42808 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.212936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.212948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.212955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.212967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.212974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.212986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.212993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:25:13.097 [2024-07-15 08:00:42.213325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.097 [2024-07-15 08:00:42.213968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.097 [2024-07-15 08:00:42.213987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:13.097 [2024-07-15 08:00:42.213999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:13.098 [2024-07-15 08:00:42.214341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.098 [2024-07-15 08:00:42.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:13.098 [2024-07-15 08:00:42.214666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.214844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.214850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:13.099 
[2024-07-15 08:00:42.215366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.099 [2024-07-15 08:00:42.215631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:13.099 [2024-07-15 08:00:42.215762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.099 [2024-07-15 08:00:42.215769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.100 [2024-07-15 08:00:42.215788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.215980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.215992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:13.100 [2024-07-15 08:00:42.216020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.216230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.216238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.219949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.219959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.219972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.219979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:13.100 [2024-07-15 08:00:42.219991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.100 [2024-07-15 08:00:42.219998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:25:13.101 [2024-07-15 08:00:42.220308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.101 [2024-07-15 08:00:42.220783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.101 [2024-07-15 08:00:42.220807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:13.101 [2024-07-15 08:00:42.220823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.102 [2024-07-15 08:00:42.220830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.102 [2024-07-15 08:00:42.220846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.102 [2024-07-15 08:00:42.220853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:13.102 [2024-07-15 08:00:42.220869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.102 [2024-07-15 08:00:42.220879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:13.102 [2024-07-15 08:00:42.220895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.102 [2024-07-15 08:00:42.220902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:13.102 [repetitive NOTICE run condensed: on the order of 150 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs followed here. A mostly-WRITE burst at 08:00:42 (nsid:1, lba ~42496-43512) and a mostly-READ burst at 08:00:55 (nsid:1, lba ~108416-109032) each had every command on qid:1 complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while that path's ANA state was inaccessible; the per-command cid/lba/sqhd values are omitted.]
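For reference, the "(03/02)" pair that spdk_nvme_print_completion prints is the NVMe (status code type / status code) tuple: type 0x3 is Path Related Status and code 0x02 is Asymmetric Access Inaccessible, which is exactly what a multipath initiator should see while a controller reports its ANA group as inaccessible. A small illustrative helper (not part of the test suite) that decodes the path-related codes this test can produce:

  # decode_nvme_status SCT SC - hypothetical helper, not in SPDK; maps the
  # path-related (sct/sc) pairs defined by the NVMe spec to their names.
  decode_nvme_status() {
      local sct=$1 sc=$2
      case "$sct/$sc" in
          00/00) echo "Successful Completion" ;;
          03/00) echo "Internal Path Error" ;;
          03/01) echo "Asymmetric Access Persistent Loss" ;;
          03/02) echo "Asymmetric Access Inaccessible" ;;
          03/03) echo "Asymmetric Access Transition" ;;
          *)     echo "unknown (sct=$sct sc=$sc)" ;;
      esac
  }
  decode_nvme_status 03 02    # -> Asymmetric Access Inaccessible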
00:25:13.104 Received shutdown signal, test time was about 27.612605 seconds
00:25:13.104
00:25:13.104 Latency(us)
00:25:13.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.104 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:13.104 Verification LBA range: start 0x0 length 0x4000
00:25:13.104 Nvme0n1 : 27.61 10291.56 40.20 0.00 0.00 12414.25 168.29 3078254.41
00:25:13.104 ===================================================================================================================
00:25:13.104 Total : 10291.56 40.20 0.00 0.00 12414.25 168.29 3078254.41
00:25:13.104 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:13.364 rmmod nvme_tcp
00:25:13.364 rmmod nvme_fabrics
00:25:13.364 rmmod nvme_keyring
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3363891 ']'
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3363891
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3363891 ']'
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3363891
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3363891
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3363891'
00:25:13.364 killing process with pid 3363891
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3363891
00:25:13.364 08:00:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3363891
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:13.622 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:13.623 08:00:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:15.525 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:15.525
00:25:15.525 real 0m40.091s
00:25:15.525 user 1m48.160s
00:25:15.525 sys 0m10.894s
00:25:15.525 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:15.525 08:01:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:15.525 ************************************
00:25:15.525 END TEST nvmf_host_multipath_status
00:25:15.525 ************************************
00:25:15.783 08:01:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:15.783 08:01:00 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:15.783 08:01:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:15.783 08:01:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:15.783 08:01:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:15.783 ************************************
00:25:15.783 START TEST nvmf_discovery_remove_ifc
00:25:15.783 ************************************
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
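The END TEST / START TEST banners above come from the suite's run_test wrapper in common/autotest_common.sh (the @1099/@1105/@1123 line references in the trace). A simplified sketch of that wrapper pattern, assuming only the behavior visible in this log (banners, argument-count guard, timing via bash's time keyword, which produces the real/user/sys lines above); the real implementation also manages xtrace state and per-test timing records:

  # Simplified run_test-style wrapper; illustrative only, not SPDK's code.
  run_test_sketch() {
      [ "$#" -le 1 ] && return 1        # mirrors the '[' 3 -le 1 ']' guard traced above
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                         # run the test script with its arguments
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }
  # e.g.: run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp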
* Looking for test storage...
00:25:15.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:15.783 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2-@6 -- # [four near-identical PATH manipulations condensed: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin are repeatedly prepended to the system PATH, PATH is exported, and the final value is echoed]
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.784 08:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:22.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:22.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.345 08:01:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:22.345 Found net devices under 0000:86:00.0: cvl_0_0 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.345 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:22.346 Found net devices under 0000:86:00.1: cvl_0_1 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:22.346 08:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:22.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:22.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:25:22.346
00:25:22.346 --- 10.0.0.2 ping statistics ---
00:25:22.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:22.346 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:22.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:22.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:25:22.346 00:25:22.346 --- 10.0.0.1 ping statistics --- 00:25:22.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.346 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3372676 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3372676 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3372676 ']' 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.346 08:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.346 [2024-07-15 08:01:06.205512] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
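The nvmftestinit/nvmf_tcp_init sequence traced above builds the whole physical-NIC test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction proves reachability before any NVMe-oF traffic starts. A standalone sketch of the same setup, assuming the interface names from this log and root privileges (the suite's own version lives in test/nvmf/common.sh):

  # Rebuild the two-port netns topology from the trace above; illustrative only.
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"             # target port leaves the root netns
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions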
00:25:22.346 [2024-07-15 08:01:06.205554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.346 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.346 [2024-07-15 08:01:06.275109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.346 [2024-07-15 08:01:06.345782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.346 [2024-07-15 08:01:06.345822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.346 [2024-07-15 08:01:06.345829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.346 [2024-07-15 08:01:06.345835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.346 [2024-07-15 08:01:06.345841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.346 [2024-07-15 08:01:06.345859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.346 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.346 [2024-07-15 08:01:07.056803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.346 [2024-07-15 08:01:07.064941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:22.346 null0 00:25:22.605 [2024-07-15 08:01:07.096943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3372914 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3372914 /tmp/host.sock 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3372914 ']' 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:22.605 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.605 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.605 [2024-07-15 08:01:07.165373] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:25:22.605 [2024-07-15 08:01:07.165415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372914 ] 00:25:22.605 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.605 [2024-07-15 08:01:07.215131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.605 [2024-07-15 08:01:07.294558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.573 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.574 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.574 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:23.574 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.574 08:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.574 08:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.574 08:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:23.574 08:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.574 08:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.507 [2024-07-15 08:01:09.114738] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:24.507 [2024-07-15 08:01:09.114758] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:24.507 [2024-07-15 08:01:09.114770] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:24.507 [2024-07-15 08:01:09.242174] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:24.765 [2024-07-15 08:01:09.427437] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:24.765 [2024-07-15 08:01:09.427478] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:24.765 [2024-07-15 08:01:09.427498] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:24.765 [2024-07-15 08:01:09.427510] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:24.765 [2024-07-15 08:01:09.427527] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.765 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.766 [2024-07-15 08:01:09.434138] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x124ae30 was disconnected and freed. delete nvme_qpair. 
00:25:24.766 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.766 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.766 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:24.766 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:24.766 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.023 08:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.957 08:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:27.330 08:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.266 08:01:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.200 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.200 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.200 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.201 08:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.134 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.134 
08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.134 [2024-07-15 08:01:14.868820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:30.134 [2024-07-15 08:01:14.868859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.134 [2024-07-15 08:01:14.868886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.134 [2024-07-15 08:01:14.868896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.134 [2024-07-15 08:01:14.868903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.134 [2024-07-15 08:01:14.868910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.134 [2024-07-15 08:01:14.868918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.134 [2024-07-15 08:01:14.868925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.134 [2024-07-15 08:01:14.868931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.134 [2024-07-15 08:01:14.868938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.134 [2024-07-15 08:01:14.868945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.134 [2024-07-15 08:01:14.868951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211690 is same with the state(5) to be set 00:25:30.134 [2024-07-15 08:01:14.878842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211690 (9): Bad file descriptor 00:25:30.392 [2024-07-15 08:01:14.888881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:30.392 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.392 08:01:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:31.329 [2024-07-15 08:01:15.938326] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:31.329 [2024-07-15 08:01:15.938419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211690 with addr=10.0.0.2, port=4420 00:25:31.329 [2024-07-15 08:01:15.938453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211690 is same with the state(5) to be set 00:25:31.329 [2024-07-15 08:01:15.938509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211690 (9): Bad file descriptor 00:25:31.329 [2024-07-15 08:01:15.939451] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.329 [2024-07-15 08:01:15.939501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.329 [2024-07-15 08:01:15.939522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.329 [2024-07-15 08:01:15.939544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.329 [2024-07-15 08:01:15.939604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.329 [2024-07-15 08:01:15.939628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:31.329 08:01:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.265 [2024-07-15 08:01:16.942127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.265 [2024-07-15 08:01:16.942148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.265 [2024-07-15 08:01:16.942155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.265 [2024-07-15 08:01:16.942161] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:32.265 [2024-07-15 08:01:16.942174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
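Throughout this stretch the host-side test is spinning in wait_for_bdev '': it downed the target-facing interface and now expects nvme0n1 to fall out of the bdev list once the controller gives up. The polling helpers, reconstructed from the rpc_cmd/jq/sort/xargs xtrace lines above (the in-tree versions may add a retry cap):

  # List bdev names over the host app's RPC socket, flattened to one line.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once a second until the list matches what the step expects
  # ('' here, meaning the controller and its namespace are fully gone).
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }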
00:25:32.265 [2024-07-15 08:01:16.942191] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:32.265 [2024-07-15 08:01:16.942212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.265 [2024-07-15 08:01:16.942221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.265 [2024-07-15 08:01:16.942249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.265 [2024-07-15 08:01:16.942256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.265 [2024-07-15 08:01:16.942262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.265 [2024-07-15 08:01:16.942269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.265 [2024-07-15 08:01:16.942276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.265 [2024-07-15 08:01:16.942282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.265 [2024-07-15 08:01:16.942289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.265 [2024-07-15 08:01:16.942295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.265 [2024-07-15 08:01:16.942301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
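The teardown above is governed by the knobs handed to bdev_nvme_start_discovery when the host app was set up: reconnect every second, fail fast I/O after one second, and delete the controller outright once it has been unreachable for two. The command as it appears earlier in the trace, with -q giving the host NQN the test connects with:

  # -s 8009 targets the discovery service; --wait-for-attach blocks the
  # RPC until the discovered subsystem's controller has attached.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach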
00:25:32.265 [2024-07-15 08:01:16.942821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1210a80 (9): Bad file descriptor 00:25:32.265 [2024-07-15 08:01:16.943832] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:32.266 [2024-07-15 08:01:16.943843] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.266 08:01:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.266 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:32.266 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.524 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.525 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.525 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.525 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:32.525 08:01:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.458 08:01:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.393 [2024-07-15 08:01:19.000722] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.393 [2024-07-15 08:01:19.000739] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.393 [2024-07-15 08:01:19.000751] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.393 [2024-07-15 08:01:19.128144] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.652 [2024-07-15 08:01:19.231666] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:34.652 [2024-07-15 08:01:19.231699] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:34.652 [2024-07-15 08:01:19.231717] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:34.652 [2024-07-15 08:01:19.231730] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:34.652 [2024-07-15 08:01:19.231737] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.652 [2024-07-15 08:01:19.239583] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12278d0 was disconnected and freed. delete nvme_qpair. 
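With the address restored and the link up again, the discovery poller re-attaches the subsystem as nvme1 and the test waits for nvme1n1 to reappear in the bdev list. Stripped of the surrounding polling, the fault this test injects and then heals is four netns-scoped ip commands, all visible verbatim in the trace:

  # Yank the target's address and link out from under a connected host...
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # ...then restore both and let discovery re-attach the subsystem.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up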
00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:34.652 08:01:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3372914 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3372914 ']' 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3372914 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3372914 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3372914' 00:25:35.590 killing process with pid 3372914 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3372914 00:25:35.590 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3372914 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.849 rmmod nvme_tcp 00:25:35.849 rmmod nvme_fabrics 00:25:35.849 rmmod nvme_keyring 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3372676 ']' 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3372676 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3372676 ']' 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3372676 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.849 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3372676 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3372676' 00:25:36.108 killing process with pid 3372676 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3372676 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3372676 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.108 08:01:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.647 08:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.647 00:25:38.647 real 0m22.550s 00:25:38.647 user 0m29.121s 00:25:38.647 sys 0m5.594s 00:25:38.647 08:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:38.647 08:01:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.647 ************************************ 00:25:38.647 END TEST nvmf_discovery_remove_ifc 00:25:38.647 ************************************ 00:25:38.647 08:01:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:38.647 08:01:22 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:38.647 08:01:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:38.647 08:01:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:25:38.647 08:01:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.647 ************************************ 00:25:38.647 START TEST nvmf_identify_kernel_target 00:25:38.647 ************************************ 00:25:38.647 08:01:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:38.647 * Looking for test storage... 00:25:38.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:38.647 08:01:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.647 08:01:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:43.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:43.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:43.940 Found net devices under 0000:86:00.0: cvl_0_0 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:43.940 Found net devices under 0000:86:00.1: cvl_0_1 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.940 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:43.941 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:44.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:44.217 00:25:44.217 --- 10.0.0.2 ping statistics --- 00:25:44.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.217 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:44.217 00:25:44.217 --- 10.0.0.1 ping statistics --- 00:25:44.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.217 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:44.217 08:01:28 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:44.217 08:01:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:47.504 Waiting for block devices as requested 00:25:47.504 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:47.504 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:47.504 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:47.762 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:47.762 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:47.762 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:48.022 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:48.022 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:48.022 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:48.022 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:48.281 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:48.281 No valid GPT data, bailing 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:48.281 08:01:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:48.281 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:48.540 00:25:48.540 Discovery Log Number of Records 2, Generation counter 2 00:25:48.540 =====Discovery Log Entry 0====== 00:25:48.540 trtype: tcp 00:25:48.540 adrfam: ipv4 00:25:48.540 subtype: current discovery subsystem 00:25:48.540 treq: not specified, sq flow control disable supported 00:25:48.540 portid: 1 00:25:48.540 trsvcid: 4420 00:25:48.540 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:48.540 traddr: 10.0.0.1 00:25:48.540 eflags: none 00:25:48.540 sectype: none 00:25:48.540 =====Discovery Log Entry 1====== 00:25:48.540 trtype: tcp 00:25:48.541 adrfam: ipv4 00:25:48.541 subtype: nvme subsystem 00:25:48.541 treq: not specified, sq flow control disable supported 00:25:48.541 portid: 1 00:25:48.541 trsvcid: 4420 00:25:48.541 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:48.541 traddr: 10.0.0.1 00:25:48.541 eflags: none 00:25:48.541 sectype: none 00:25:48.541 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:48.541 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:48.541 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.541 ===================================================== 00:25:48.541 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:48.541 ===================================================== 00:25:48.541 Controller Capabilities/Features 00:25:48.541 ================================ 00:25:48.541 Vendor ID: 0000 00:25:48.541 Subsystem Vendor ID: 0000 00:25:48.541 Serial Number: 633e8cedf98e0c094266 00:25:48.541 Model Number: Linux 00:25:48.541 Firmware Version: 6.7.0-68 00:25:48.541 Recommended Arb Burst: 0 00:25:48.541 IEEE OUI Identifier: 00 00 00 00:25:48.541 Multi-path I/O 00:25:48.541 May have multiple subsystem ports: No 00:25:48.541 May have multiple 
controllers: No 00:25:48.541 Associated with SR-IOV VF: No 00:25:48.541 Max Data Transfer Size: Unlimited 00:25:48.541 Max Number of Namespaces: 0 00:25:48.541 Max Number of I/O Queues: 1024 00:25:48.541 NVMe Specification Version (VS): 1.3 00:25:48.541 NVMe Specification Version (Identify): 1.3 00:25:48.541 Maximum Queue Entries: 1024 00:25:48.541 Contiguous Queues Required: No 00:25:48.541 Arbitration Mechanisms Supported 00:25:48.541 Weighted Round Robin: Not Supported 00:25:48.541 Vendor Specific: Not Supported 00:25:48.541 Reset Timeout: 7500 ms 00:25:48.541 Doorbell Stride: 4 bytes 00:25:48.541 NVM Subsystem Reset: Not Supported 00:25:48.541 Command Sets Supported 00:25:48.541 NVM Command Set: Supported 00:25:48.541 Boot Partition: Not Supported 00:25:48.541 Memory Page Size Minimum: 4096 bytes 00:25:48.541 Memory Page Size Maximum: 4096 bytes 00:25:48.541 Persistent Memory Region: Not Supported 00:25:48.541 Optional Asynchronous Events Supported 00:25:48.541 Namespace Attribute Notices: Not Supported 00:25:48.541 Firmware Activation Notices: Not Supported 00:25:48.541 ANA Change Notices: Not Supported 00:25:48.541 PLE Aggregate Log Change Notices: Not Supported 00:25:48.541 LBA Status Info Alert Notices: Not Supported 00:25:48.541 EGE Aggregate Log Change Notices: Not Supported 00:25:48.541 Normal NVM Subsystem Shutdown event: Not Supported 00:25:48.541 Zone Descriptor Change Notices: Not Supported 00:25:48.541 Discovery Log Change Notices: Supported 00:25:48.541 Controller Attributes 00:25:48.541 128-bit Host Identifier: Not Supported 00:25:48.541 Non-Operational Permissive Mode: Not Supported 00:25:48.541 NVM Sets: Not Supported 00:25:48.541 Read Recovery Levels: Not Supported 00:25:48.541 Endurance Groups: Not Supported 00:25:48.541 Predictable Latency Mode: Not Supported 00:25:48.541 Traffic Based Keep ALive: Not Supported 00:25:48.541 Namespace Granularity: Not Supported 00:25:48.541 SQ Associations: Not Supported 00:25:48.541 UUID List: Not Supported 00:25:48.541 Multi-Domain Subsystem: Not Supported 00:25:48.541 Fixed Capacity Management: Not Supported 00:25:48.541 Variable Capacity Management: Not Supported 00:25:48.541 Delete Endurance Group: Not Supported 00:25:48.541 Delete NVM Set: Not Supported 00:25:48.541 Extended LBA Formats Supported: Not Supported 00:25:48.541 Flexible Data Placement Supported: Not Supported 00:25:48.541 00:25:48.541 Controller Memory Buffer Support 00:25:48.541 ================================ 00:25:48.541 Supported: No 00:25:48.541 00:25:48.541 Persistent Memory Region Support 00:25:48.541 ================================ 00:25:48.541 Supported: No 00:25:48.541 00:25:48.541 Admin Command Set Attributes 00:25:48.541 ============================ 00:25:48.541 Security Send/Receive: Not Supported 00:25:48.541 Format NVM: Not Supported 00:25:48.541 Firmware Activate/Download: Not Supported 00:25:48.541 Namespace Management: Not Supported 00:25:48.541 Device Self-Test: Not Supported 00:25:48.541 Directives: Not Supported 00:25:48.541 NVMe-MI: Not Supported 00:25:48.541 Virtualization Management: Not Supported 00:25:48.541 Doorbell Buffer Config: Not Supported 00:25:48.541 Get LBA Status Capability: Not Supported 00:25:48.541 Command & Feature Lockdown Capability: Not Supported 00:25:48.541 Abort Command Limit: 1 00:25:48.541 Async Event Request Limit: 1 00:25:48.541 Number of Firmware Slots: N/A 00:25:48.541 Firmware Slot 1 Read-Only: N/A 00:25:48.541 Firmware Activation Without Reset: N/A 00:25:48.541 Multiple Update Detection Support: N/A 
00:25:48.541 Firmware Update Granularity: No Information Provided 00:25:48.541 Per-Namespace SMART Log: No 00:25:48.541 Asymmetric Namespace Access Log Page: Not Supported 00:25:48.541 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:48.541 Command Effects Log Page: Not Supported 00:25:48.541 Get Log Page Extended Data: Supported 00:25:48.541 Telemetry Log Pages: Not Supported 00:25:48.541 Persistent Event Log Pages: Not Supported 00:25:48.541 Supported Log Pages Log Page: May Support 00:25:48.541 Commands Supported & Effects Log Page: Not Supported 00:25:48.541 Feature Identifiers & Effects Log Page:May Support 00:25:48.541 NVMe-MI Commands & Effects Log Page: May Support 00:25:48.541 Data Area 4 for Telemetry Log: Not Supported 00:25:48.541 Error Log Page Entries Supported: 1 00:25:48.541 Keep Alive: Not Supported 00:25:48.541 00:25:48.541 NVM Command Set Attributes 00:25:48.541 ========================== 00:25:48.541 Submission Queue Entry Size 00:25:48.541 Max: 1 00:25:48.541 Min: 1 00:25:48.541 Completion Queue Entry Size 00:25:48.541 Max: 1 00:25:48.541 Min: 1 00:25:48.541 Number of Namespaces: 0 00:25:48.541 Compare Command: Not Supported 00:25:48.541 Write Uncorrectable Command: Not Supported 00:25:48.541 Dataset Management Command: Not Supported 00:25:48.541 Write Zeroes Command: Not Supported 00:25:48.541 Set Features Save Field: Not Supported 00:25:48.541 Reservations: Not Supported 00:25:48.541 Timestamp: Not Supported 00:25:48.541 Copy: Not Supported 00:25:48.541 Volatile Write Cache: Not Present 00:25:48.541 Atomic Write Unit (Normal): 1 00:25:48.541 Atomic Write Unit (PFail): 1 00:25:48.541 Atomic Compare & Write Unit: 1 00:25:48.541 Fused Compare & Write: Not Supported 00:25:48.541 Scatter-Gather List 00:25:48.541 SGL Command Set: Supported 00:25:48.541 SGL Keyed: Not Supported 00:25:48.541 SGL Bit Bucket Descriptor: Not Supported 00:25:48.541 SGL Metadata Pointer: Not Supported 00:25:48.541 Oversized SGL: Not Supported 00:25:48.541 SGL Metadata Address: Not Supported 00:25:48.541 SGL Offset: Supported 00:25:48.541 Transport SGL Data Block: Not Supported 00:25:48.541 Replay Protected Memory Block: Not Supported 00:25:48.541 00:25:48.541 Firmware Slot Information 00:25:48.541 ========================= 00:25:48.541 Active slot: 0 00:25:48.541 00:25:48.541 00:25:48.541 Error Log 00:25:48.541 ========= 00:25:48.541 00:25:48.541 Active Namespaces 00:25:48.541 ================= 00:25:48.541 Discovery Log Page 00:25:48.541 ================== 00:25:48.541 Generation Counter: 2 00:25:48.541 Number of Records: 2 00:25:48.541 Record Format: 0 00:25:48.541 00:25:48.541 Discovery Log Entry 0 00:25:48.541 ---------------------- 00:25:48.541 Transport Type: 3 (TCP) 00:25:48.541 Address Family: 1 (IPv4) 00:25:48.541 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:48.541 Entry Flags: 00:25:48.541 Duplicate Returned Information: 0 00:25:48.541 Explicit Persistent Connection Support for Discovery: 0 00:25:48.541 Transport Requirements: 00:25:48.541 Secure Channel: Not Specified 00:25:48.541 Port ID: 1 (0x0001) 00:25:48.541 Controller ID: 65535 (0xffff) 00:25:48.541 Admin Max SQ Size: 32 00:25:48.541 Transport Service Identifier: 4420 00:25:48.541 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:48.541 Transport Address: 10.0.0.1 00:25:48.541 Discovery Log Entry 1 00:25:48.541 ---------------------- 00:25:48.541 Transport Type: 3 (TCP) 00:25:48.541 Address Family: 1 (IPv4) 00:25:48.541 Subsystem Type: 2 (NVM Subsystem) 00:25:48.541 Entry Flags: 
00:25:48.541 Duplicate Returned Information: 0 00:25:48.541 Explicit Persistent Connection Support for Discovery: 0 00:25:48.541 Transport Requirements: 00:25:48.541 Secure Channel: Not Specified 00:25:48.541 Port ID: 1 (0x0001) 00:25:48.541 Controller ID: 65535 (0xffff) 00:25:48.541 Admin Max SQ Size: 32 00:25:48.541 Transport Service Identifier: 4420 00:25:48.541 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:48.541 Transport Address: 10.0.0.1 00:25:48.542 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:48.542 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.542 get_feature(0x01) failed 00:25:48.542 get_feature(0x02) failed 00:25:48.542 get_feature(0x04) failed 00:25:48.542 ===================================================== 00:25:48.542 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:48.542 ===================================================== 00:25:48.542 Controller Capabilities/Features 00:25:48.542 ================================ 00:25:48.542 Vendor ID: 0000 00:25:48.542 Subsystem Vendor ID: 0000 00:25:48.542 Serial Number: 4d3d8733f59ce735f2ef 00:25:48.542 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:48.542 Firmware Version: 6.7.0-68 00:25:48.542 Recommended Arb Burst: 6 00:25:48.542 IEEE OUI Identifier: 00 00 00 00:25:48.542 Multi-path I/O 00:25:48.542 May have multiple subsystem ports: Yes 00:25:48.542 May have multiple controllers: Yes 00:25:48.542 Associated with SR-IOV VF: No 00:25:48.542 Max Data Transfer Size: Unlimited 00:25:48.542 Max Number of Namespaces: 1024 00:25:48.542 Max Number of I/O Queues: 128 00:25:48.542 NVMe Specification Version (VS): 1.3 00:25:48.542 NVMe Specification Version (Identify): 1.3 00:25:48.542 Maximum Queue Entries: 1024 00:25:48.542 Contiguous Queues Required: No 00:25:48.542 Arbitration Mechanisms Supported 00:25:48.542 Weighted Round Robin: Not Supported 00:25:48.542 Vendor Specific: Not Supported 00:25:48.542 Reset Timeout: 7500 ms 00:25:48.542 Doorbell Stride: 4 bytes 00:25:48.542 NVM Subsystem Reset: Not Supported 00:25:48.542 Command Sets Supported 00:25:48.542 NVM Command Set: Supported 00:25:48.542 Boot Partition: Not Supported 00:25:48.542 Memory Page Size Minimum: 4096 bytes 00:25:48.542 Memory Page Size Maximum: 4096 bytes 00:25:48.542 Persistent Memory Region: Not Supported 00:25:48.542 Optional Asynchronous Events Supported 00:25:48.542 Namespace Attribute Notices: Supported 00:25:48.542 Firmware Activation Notices: Not Supported 00:25:48.542 ANA Change Notices: Supported 00:25:48.542 PLE Aggregate Log Change Notices: Not Supported 00:25:48.542 LBA Status Info Alert Notices: Not Supported 00:25:48.542 EGE Aggregate Log Change Notices: Not Supported 00:25:48.542 Normal NVM Subsystem Shutdown event: Not Supported 00:25:48.542 Zone Descriptor Change Notices: Not Supported 00:25:48.542 Discovery Log Change Notices: Not Supported 00:25:48.542 Controller Attributes 00:25:48.542 128-bit Host Identifier: Supported 00:25:48.542 Non-Operational Permissive Mode: Not Supported 00:25:48.542 NVM Sets: Not Supported 00:25:48.542 Read Recovery Levels: Not Supported 00:25:48.542 Endurance Groups: Not Supported 00:25:48.542 Predictable Latency Mode: Not Supported 00:25:48.542 Traffic Based Keep ALive: Supported 00:25:48.542 Namespace Granularity: Not Supported 
00:25:48.542 SQ Associations: Not Supported 00:25:48.542 UUID List: Not Supported 00:25:48.542 Multi-Domain Subsystem: Not Supported 00:25:48.542 Fixed Capacity Management: Not Supported 00:25:48.542 Variable Capacity Management: Not Supported 00:25:48.542 Delete Endurance Group: Not Supported 00:25:48.542 Delete NVM Set: Not Supported 00:25:48.542 Extended LBA Formats Supported: Not Supported 00:25:48.542 Flexible Data Placement Supported: Not Supported 00:25:48.542 00:25:48.542 Controller Memory Buffer Support 00:25:48.542 ================================ 00:25:48.542 Supported: No 00:25:48.542 00:25:48.542 Persistent Memory Region Support 00:25:48.542 ================================ 00:25:48.542 Supported: No 00:25:48.542 00:25:48.542 Admin Command Set Attributes 00:25:48.542 ============================ 00:25:48.542 Security Send/Receive: Not Supported 00:25:48.542 Format NVM: Not Supported 00:25:48.542 Firmware Activate/Download: Not Supported 00:25:48.542 Namespace Management: Not Supported 00:25:48.542 Device Self-Test: Not Supported 00:25:48.542 Directives: Not Supported 00:25:48.542 NVMe-MI: Not Supported 00:25:48.542 Virtualization Management: Not Supported 00:25:48.542 Doorbell Buffer Config: Not Supported 00:25:48.542 Get LBA Status Capability: Not Supported 00:25:48.542 Command & Feature Lockdown Capability: Not Supported 00:25:48.542 Abort Command Limit: 4 00:25:48.542 Async Event Request Limit: 4 00:25:48.542 Number of Firmware Slots: N/A 00:25:48.542 Firmware Slot 1 Read-Only: N/A 00:25:48.542 Firmware Activation Without Reset: N/A 00:25:48.542 Multiple Update Detection Support: N/A 00:25:48.542 Firmware Update Granularity: No Information Provided 00:25:48.542 Per-Namespace SMART Log: Yes 00:25:48.542 Asymmetric Namespace Access Log Page: Supported 00:25:48.542 ANA Transition Time : 10 sec 00:25:48.542 00:25:48.542 Asymmetric Namespace Access Capabilities 00:25:48.542 ANA Optimized State : Supported 00:25:48.542 ANA Non-Optimized State : Supported 00:25:48.543 ANA Inaccessible State : Supported 00:25:48.543 ANA Persistent Loss State : Supported 00:25:48.543 ANA Change State : Supported 00:25:48.543 ANAGRPID is not changed : No 00:25:48.543 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:48.543 00:25:48.543 ANA Group Identifier Maximum : 128 00:25:48.543 Number of ANA Group Identifiers : 128 00:25:48.543 Max Number of Allowed Namespaces : 1024 00:25:48.543 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:48.543 Command Effects Log Page: Supported 00:25:48.543 Get Log Page Extended Data: Supported 00:25:48.543 Telemetry Log Pages: Not Supported 00:25:48.543 Persistent Event Log Pages: Not Supported 00:25:48.543 Supported Log Pages Log Page: May Support 00:25:48.543 Commands Supported & Effects Log Page: Not Supported 00:25:48.543 Feature Identifiers & Effects Log Page:May Support 00:25:48.543 NVMe-MI Commands & Effects Log Page: May Support 00:25:48.543 Data Area 4 for Telemetry Log: Not Supported 00:25:48.543 Error Log Page Entries Supported: 128 00:25:48.543 Keep Alive: Supported 00:25:48.543 Keep Alive Granularity: 1000 ms 00:25:48.543 00:25:48.543 NVM Command Set Attributes 00:25:48.543 ========================== 00:25:48.543 Submission Queue Entry Size 00:25:48.543 Max: 64 00:25:48.543 Min: 64 00:25:48.543 Completion Queue Entry Size 00:25:48.543 Max: 16 00:25:48.543 Min: 16 00:25:48.543 Number of Namespaces: 1024 00:25:48.543 Compare Command: Not Supported 00:25:48.543 Write Uncorrectable Command: Not Supported 00:25:48.543 Dataset Management Command: Supported 
00:25:48.543 Write Zeroes Command: Supported 00:25:48.543 Set Features Save Field: Not Supported 00:25:48.543 Reservations: Not Supported 00:25:48.543 Timestamp: Not Supported 00:25:48.543 Copy: Not Supported 00:25:48.543 Volatile Write Cache: Present 00:25:48.543 Atomic Write Unit (Normal): 1 00:25:48.543 Atomic Write Unit (PFail): 1 00:25:48.543 Atomic Compare & Write Unit: 1 00:25:48.543 Fused Compare & Write: Not Supported 00:25:48.543 Scatter-Gather List 00:25:48.543 SGL Command Set: Supported 00:25:48.543 SGL Keyed: Not Supported 00:25:48.543 SGL Bit Bucket Descriptor: Not Supported 00:25:48.543 SGL Metadata Pointer: Not Supported 00:25:48.543 Oversized SGL: Not Supported 00:25:48.543 SGL Metadata Address: Not Supported 00:25:48.543 SGL Offset: Supported 00:25:48.543 Transport SGL Data Block: Not Supported 00:25:48.543 Replay Protected Memory Block: Not Supported 00:25:48.543 00:25:48.543 Firmware Slot Information 00:25:48.543 ========================= 00:25:48.543 Active slot: 0 00:25:48.543 00:25:48.543 Asymmetric Namespace Access 00:25:48.543 =========================== 00:25:48.543 Change Count : 0 00:25:48.543 Number of ANA Group Descriptors : 1 00:25:48.543 ANA Group Descriptor : 0 00:25:48.543 ANA Group ID : 1 00:25:48.543 Number of NSID Values : 1 00:25:48.543 Change Count : 0 00:25:48.543 ANA State : 1 00:25:48.543 Namespace Identifier : 1 00:25:48.543 00:25:48.543 Commands Supported and Effects 00:25:48.543 ============================== 00:25:48.543 Admin Commands 00:25:48.543 -------------- 00:25:48.543 Get Log Page (02h): Supported 00:25:48.543 Identify (06h): Supported 00:25:48.543 Abort (08h): Supported 00:25:48.543 Set Features (09h): Supported 00:25:48.543 Get Features (0Ah): Supported 00:25:48.543 Asynchronous Event Request (0Ch): Supported 00:25:48.543 Keep Alive (18h): Supported 00:25:48.543 I/O Commands 00:25:48.543 ------------ 00:25:48.543 Flush (00h): Supported 00:25:48.543 Write (01h): Supported LBA-Change 00:25:48.543 Read (02h): Supported 00:25:48.544 Write Zeroes (08h): Supported LBA-Change 00:25:48.544 Dataset Management (09h): Supported 00:25:48.544 00:25:48.544 Error Log 00:25:48.544 ========= 00:25:48.544 Entry: 0 00:25:48.544 Error Count: 0x3 00:25:48.544 Submission Queue Id: 0x0 00:25:48.544 Command Id: 0x5 00:25:48.544 Phase Bit: 0 00:25:48.544 Status Code: 0x2 00:25:48.544 Status Code Type: 0x0 00:25:48.544 Do Not Retry: 1 00:25:48.544 Error Location: 0x28 00:25:48.544 LBA: 0x0 00:25:48.544 Namespace: 0x0 00:25:48.544 Vendor Log Page: 0x0 00:25:48.544 ----------- 00:25:48.544 Entry: 1 00:25:48.544 Error Count: 0x2 00:25:48.544 Submission Queue Id: 0x0 00:25:48.544 Command Id: 0x5 00:25:48.544 Phase Bit: 0 00:25:48.544 Status Code: 0x2 00:25:48.544 Status Code Type: 0x0 00:25:48.544 Do Not Retry: 1 00:25:48.544 Error Location: 0x28 00:25:48.544 LBA: 0x0 00:25:48.544 Namespace: 0x0 00:25:48.544 Vendor Log Page: 0x0 00:25:48.544 ----------- 00:25:48.544 Entry: 2 00:25:48.544 Error Count: 0x1 00:25:48.544 Submission Queue Id: 0x0 00:25:48.544 Command Id: 0x4 00:25:48.544 Phase Bit: 0 00:25:48.544 Status Code: 0x2 00:25:48.544 Status Code Type: 0x0 00:25:48.544 Do Not Retry: 1 00:25:48.544 Error Location: 0x28 00:25:48.544 LBA: 0x0 00:25:48.544 Namespace: 0x0 00:25:48.544 Vendor Log Page: 0x0 00:25:48.544 00:25:48.544 Number of Queues 00:25:48.544 ================ 00:25:48.544 Number of I/O Submission Queues: 128 00:25:48.544 Number of I/O Completion Queues: 128 00:25:48.544 00:25:48.544 ZNS Specific Controller Data 00:25:48.544 
============================ 00:25:48.544 Zone Append Size Limit: 0 00:25:48.544 00:25:48.544 00:25:48.544 Active Namespaces 00:25:48.544 ================= 00:25:48.544 get_feature(0x05) failed 00:25:48.544 Namespace ID:1 00:25:48.544 Command Set Identifier: NVM (00h) 00:25:48.544 Deallocate: Supported 00:25:48.544 Deallocated/Unwritten Error: Not Supported 00:25:48.544 Deallocated Read Value: Unknown 00:25:48.544 Deallocate in Write Zeroes: Not Supported 00:25:48.544 Deallocated Guard Field: 0xFFFF 00:25:48.544 Flush: Supported 00:25:48.544 Reservation: Not Supported 00:25:48.544 Namespace Sharing Capabilities: Multiple Controllers 00:25:48.544 Size (in LBAs): 1953525168 (931GiB) 00:25:48.544 Capacity (in LBAs): 1953525168 (931GiB) 00:25:48.544 Utilization (in LBAs): 1953525168 (931GiB) 00:25:48.544 UUID: 5d3bd077-53c4-441c-98b5-227af33b4d18 00:25:48.544 Thin Provisioning: Not Supported 00:25:48.545 Per-NS Atomic Units: Yes 00:25:48.545 Atomic Boundary Size (Normal): 0 00:25:48.545 Atomic Boundary Size (PFail): 0 00:25:48.545 Atomic Boundary Offset: 0 00:25:48.545 NGUID/EUI64 Never Reused: No 00:25:48.545 ANA group ID: 1 00:25:48.545 Namespace Write Protected: No 00:25:48.545 Number of LBA Formats: 1 00:25:48.545 Current LBA Format: LBA Format #00 00:25:48.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:48.545 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.545 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.545 rmmod nvme_tcp 00:25:48.804 rmmod nvme_fabrics 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.804 08:01:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:50.709 
08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:50.709 08:01:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:53.996 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:53.996 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:54.565 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:54.565 00:25:54.565 real 0m16.344s 00:25:54.565 user 0m4.023s 00:25:54.565 sys 0m8.627s 00:25:54.565 08:01:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:54.565 08:01:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.565 ************************************ 00:25:54.565 END TEST nvmf_identify_kernel_target 00:25:54.565 ************************************ 00:25:54.825 08:01:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:54.825 08:01:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:54.825 08:01:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:54.825 08:01:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.825 08:01:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.825 ************************************ 00:25:54.825 START TEST nvmf_auth_host 00:25:54.825 ************************************ 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:54.825 * Looking for test storage... 00:25:54.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.825 08:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.392 
08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:01.392 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:01.392 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:01.392 Found net devices under 0000:86:00.0: 
cvl_0_0 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:01.392 Found net devices under 0000:86:00.1: cvl_0_1 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.392 08:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:01.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:26:01.392 00:26:01.392 --- 10.0.0.2 ping statistics --- 00:26:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.392 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:26:01.392 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:26:01.392 00:26:01.392 --- 10.0.0.1 ping statistics --- 00:26:01.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.392 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3384803 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3384803 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3384803 ']' 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
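For reference, the nvmftestinit/nvmfappstart sequence traced above reduces to the minimal sketch below. Every command is taken verbatim from the trace (nvmf/common.sh@244-268 and @480); the interface names (cvl_0_0/cvl_0_1), addresses, and build path are the ones the log shows, and nothing beyond those commands is implied.
# Flush any stale addresses, then move one e810 port into a private namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address both ends and bring the links up (10.0.0.2 = target side in the netns, 10.0.0.1 = initiator side)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP default port and confirm both directions answer
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the SPDK target inside the namespace; -L nvme_auth enables the
# nvme_auth debug log component that the DH-HMAC-CHAP key setup below exercises
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth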
00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.393 08:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d58359200506d199ee6e01b50385f10 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0PZ 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d58359200506d199ee6e01b50385f10 0 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d58359200506d199ee6e01b50385f10 0 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d58359200506d199ee6e01b50385f10 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:01.393 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0PZ 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0PZ 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0PZ 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:01.652 
08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e5586315d01ff5e3128fcf8c39c58277190108066848f6f50020082f5311cff2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qjN 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e5586315d01ff5e3128fcf8c39c58277190108066848f6f50020082f5311cff2 3 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e5586315d01ff5e3128fcf8c39c58277190108066848f6f50020082f5311cff2 3 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e5586315d01ff5e3128fcf8c39c58277190108066848f6f50020082f5311cff2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qjN 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qjN 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qjN 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10aa47216d45ab0c2d432404178353065aa984bae7879f10 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Evw 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10aa47216d45ab0c2d432404178353065aa984bae7879f10 0 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10aa47216d45ab0c2d432404178353065aa984bae7879f10 0 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10aa47216d45ab0c2d432404178353065aa984bae7879f10 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Evw 00:26:01.652 08:01:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Evw 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Evw 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78f58a5ea29751cd2d3f54ec69758d57d629f6f70aea292c 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.s5I 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78f58a5ea29751cd2d3f54ec69758d57d629f6f70aea292c 2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78f58a5ea29751cd2d3f54ec69758d57d629f6f70aea292c 2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78f58a5ea29751cd2d3f54ec69758d57d629f6f70aea292c 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.s5I 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.s5I 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.s5I 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef75c5d3cea989168cbdf7b72b66c3b2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.I8H 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef75c5d3cea989168cbdf7b72b66c3b2 1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef75c5d3cea989168cbdf7b72b66c3b2 1 
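Every gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes as a hex string, wrap that string in the NVMe TP 8006 DHHC-1 container, and stash the result in a mode-0600 temp file. Below is a stand-alone sketch of that recipe; the CRC-32 and base64 details are inferred from the traced output (the base64 payloads visibly encode the ASCII hex string), so treat it as a hedged reconstruction rather than the canonical helper.

len=48                                    # key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
digest=2                                  # 0=null 1=sha256 2=sha384 3=sha512
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# TP 8006 secrets append a little-endian CRC-32 of the key before base64
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"                        # secrets must not be world-readable

Decoding the first generated key above is consistent with this layout: the base64 payload 'MTBhYTQ3...' is just the ASCII hex string '10aa4721...' plus four trailing CRC bytes.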
00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef75c5d3cea989168cbdf7b72b66c3b2 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.I8H 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.I8H 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.I8H 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:01.652 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c4ce6928a5a0ceb3b3cbd8222abcf7c 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aeW 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2c4ce6928a5a0ceb3b3cbd8222abcf7c 1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c4ce6928a5a0ceb3b3cbd8222abcf7c 1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2c4ce6928a5a0ceb3b3cbd8222abcf7c 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aeW 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aeW 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.aeW 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=0d672039e428e6e0d573072ae6a9ea22c85c7ae75e3fbe4f 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.A9s 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d672039e428e6e0d573072ae6a9ea22c85c7ae75e3fbe4f 2 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d672039e428e6e0d573072ae6a9ea22c85c7ae75e3fbe4f 2 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d672039e428e6e0d573072ae6a9ea22c85c7ae75e3fbe4f 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.A9s 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.A9s 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.A9s 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=892481b05472739239a089be1a425e22 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wlg 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 892481b05472739239a089be1a425e22 0 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 892481b05472739239a089be1a425e22 0 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=892481b05472739239a089be1a425e22 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wlg 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wlg 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.wlg 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:01.911 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=17eeeb178b9f01e6a8541992390c2dec7a8f7196b3433b32abcf3fe2235bf15a 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cZi 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 17eeeb178b9f01e6a8541992390c2dec7a8f7196b3433b32abcf3fe2235bf15a 3 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 17eeeb178b9f01e6a8541992390c2dec7a8f7196b3433b32abcf3fe2235bf15a 3 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=17eeeb178b9f01e6a8541992390c2dec7a8f7196b3433b32abcf3fe2235bf15a 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cZi 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cZi 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cZi 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3384803 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3384803 ']' 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
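waitforlisten is now blocking until the target process (pid 3384803) answers on /var/tmp/spdk.sock. A minimal equivalent of that polling loop, with the retry count and interval as assumptions:

rpc=/var/tmp/spdk.sock
for ((i = 100; i > 0; i--)); do
	scripts/rpc.py -s "$rpc" rpc_get_methods &> /dev/null && break
	sleep 0.1
done
(( i > 0 )) || { echo 'target never came up' >&2; exit 1; }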
00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.912 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0PZ 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qjN ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qjN 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Evw 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.s5I ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s5I 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.170 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.I8H 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.aeW ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aeW 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
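The registration loop running through these entries (it continues through key4 just below) hands every generated secret to the target as a named keyring entry: keyN for the host-to-controller key of slot N, and ckeyN only when a bidirectional controller key was generated for that slot; slot 4 deliberately has none. With the keys/ckeys arrays populated as above, the loop condenses to:

for i in "${!keys[@]}"; do
	scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
	[[ -n ${ckeys[i]} ]] &&
		scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done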
00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.A9s 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wlg ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wlg 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cZi 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
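nvmet_auth_init now builds a kernel NVMe-oF target over configfs: load nvmet, create a subsystem with one namespace backed by /dev/nvme0n1, expose it on a TCP port at 10.0.0.1:4420, then (back in auth.sh) disable allow_any_host and whitelist nqn.2024-02.io.spdk:host0. xtrace does not record redirection targets, so in the sketch of those steps below the nvmet attribute names are inferred from the kernel's configfs interface rather than read from the log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The discovery listing that follows (two records: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both tcp/ipv4 on 10.0.0.1:4420) confirms the port came up.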
00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:02.171 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:02.429 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:02.429 08:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:04.961 Waiting for block devices as requested 00:26:04.961 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:04.961 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:05.220 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:05.220 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:05.220 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:05.220 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:05.478 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:05.478 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:05.478 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:05.478 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:05.738 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:05.738 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:05.738 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:05.998 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:05.998 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:05.998 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:05.998 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:06.934 No valid GPT data, bailing 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:06.934 00:26:06.934 Discovery Log Number of Records 2, Generation counter 2 00:26:06.934 =====Discovery Log Entry 0====== 00:26:06.934 trtype: tcp 00:26:06.934 adrfam: ipv4 00:26:06.934 subtype: current discovery subsystem 00:26:06.934 treq: not specified, sq flow control disable supported 00:26:06.934 portid: 1 00:26:06.934 trsvcid: 4420 00:26:06.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:06.934 traddr: 10.0.0.1 00:26:06.934 eflags: none 00:26:06.934 sectype: none 00:26:06.934 =====Discovery Log Entry 1====== 00:26:06.934 trtype: tcp 00:26:06.934 adrfam: ipv4 00:26:06.934 subtype: nvme subsystem 00:26:06.934 treq: not specified, sq flow control disable supported 00:26:06.934 portid: 1 00:26:06.934 trsvcid: 4420 00:26:06.934 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:06.934 traddr: 10.0.0.1 00:26:06.934 eflags: none 00:26:06.934 sectype: none 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 
]] 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:06.934 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 nvme0n1 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.935 08:01:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.194 
08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.194 nvme0n1 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.194 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.453 08:01:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.453 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.454 08:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.454 nvme0n1 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
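Each iteration of this matrix (digest x dhgroup x keyid) follows the same two-sided pattern: nvmet_auth_set_key writes the secret and parameters into the kernel host entry, then connect_authenticate points the SPDK initiator at the same values and attaches. Reduced to its commands, with the kernel-side attribute names inferred (xtrace again hides the redirection targets) and the values taken from the keyid=1 pass above:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "DHHC-1:00:MTBhYTQ3...==:" > "$host/dhchap_key"       # key1, abbreviated
echo "DHHC-1:02:NzhmNThh...==:" > "$host/dhchap_ctrl_key"  # ckey1, abbreviated

scripts/rpc.py bdev_nvme_set_options \
	--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
	-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# success check and teardown, as in every iteration below:
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0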
00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.454 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.712 nvme0n1 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:07.712 08:01:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.712 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.971 nvme0n1 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.971 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 nvme0n1 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.230 08:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.489 nvme0n1 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:08.489 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.490 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.748 nvme0n1 00:26:08.748 
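[The key=DHHC-1:... and ckey=DHHC-1:... strings traced above are DH-HMAC-CHAP secrets in the NVMe-oF representation DHHC-1:<hh>:<base64(secret || crc32)>:, where the two-digit field names the hash used to transform the secret before use (00 = use as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the payload length implies a 32-, 48-, or 64-byte secret plus a 4-byte CRC. A minimal sketch of producing compatible secrets with nvme-cli's gen-dhchap-key subcommand; the exact flags below are an assumption about the nvme-cli version in use, not taken from this script:

# Untransformed 32-byte secret (prints as DHHC-1:00:...), like key0/key1 above.
nvme gen-dhchap-key -m 0 -l 32 -n nqn.2024-02.io.spdk:host0

# SHA-512-transformed 64-byte secret (prints as DHHC-1:03:...), like key4 above;
# -n seeds the transformation with the host NQN.
nvme gen-dhchap-key -m 3 -l 64 -n nqn.2024-02.io.spdk:host0

The mix of 00/01/02/03 prefixes across key0..key4 in this run suggests the harness deliberately exercises every transformation variant.]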
08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.748 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.749 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.006 nvme0n1 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
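[The echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... triples traced at host/auth.sh@48-51 are nvmet_auth_set_key pushing the expected hash, DH group, and per-host secrets into the kernel nvmet target before each connect. A sketch of what such a helper plausibly does; the configfs path and attribute names here are assumptions about the kernel target layout, not lifted from the script:

# Hypothetical reconstruction of nvmet_auth_set_key (kernel target side).
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac($digest)"   > "$host/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${keys[$keyid]}" > "$host/dhchap_key"      # host secret
    # Controller secret only when the slot has one (enables bidirectional auth);
    # key4 has none, which is why its trace shows [[ -z '' ]] at auth.sh@51.
    [[ -z ${ckeys[$keyid]} ]] || echo "${ckeys[$keyid]}" > "$host/dhchap_ctrl_key"
}]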
00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.007 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 nvme0n1 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.265 
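[The ip_candidates block that repeats before every attach is get_main_ns_ip resolving which address the initiator should dial: an associative array maps each transport to the *name* of an environment variable, and bash indirect expansion dereferences it. A reconstruction consistent with the traced nvmf/common.sh@741-755 lines (the TEST_TRANSPORT handling is inferred from the [[ -z tcp ]] check):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # traces as [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}  # holds the variable name, not the address
    [[ -z ${!ip} ]] && return 1           # indirect expansion; traces as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                          # -> 10.0.0.1 in this run
}]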
08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.265 08:01:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.265 08:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.524 nvme0n1 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:09.524 08:01:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.524 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.783 nvme0n1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.783 08:01:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.783 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.042 nvme0n1 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:10.042 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.043 08:01:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.043 08:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.301 nvme0n1 00:26:10.301 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.301 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.301 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.301 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.301 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
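[On the SPDK initiator side, every iteration reduces to the same two RPCs: bdev_nvme_set_options pins negotiation to exactly the digest/DH-group pair under test, then bdev_nvme_attach_controller dials the target with the matching key names. Stated as plain rpc.py invocations, assuming the default RPC socket and that key2/ckey2 were registered with SPDK earlier in the script:

# Restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Authenticate on connect: --dhchap-key is the host secret; --dhchap-ctrlr-key
# requests bidirectional auth (omitted for keyid 4, which has no controller key).
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2]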
00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.560 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 nvme0n1 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.819 08:01:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.819 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.079 nvme0n1 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:11.079 08:01:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.079 08:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.646 nvme0n1 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.646 
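[After each attach, the script proves the controller actually came up before moving on: it lists controllers over RPC, extracts the name with jq, and string-matches against nvme0 (the backslashes in the traced [[ nvme0 == \n\v\m\e\0 ]] are just xtrace escaping each character of a literal, non-glob pattern). A sketch of that verify-and-teardown step, mirroring host/auth.sh@64-65:

# Confirm the authenticated controller exists on the initiator...
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # renders as [[ nvme0 == \n\v\m\e\0 ]] under set -x

# ...then drop it so the next dhgroup/keyid combination starts clean.
./scripts/rpc.py bdev_nvme_detach_controller nvme0

The recurring bare "nvme0n1" entries in the log are the namespace surfacing on the host each time the attach succeeds.]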
08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.646 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.647 08:01:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.647 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.905 nvme0n1 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.905 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.164 08:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.422 nvme0n1 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.422 
08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.422 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 nvme0n1 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.988 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.245 nvme0n1 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.245 08:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.503 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.504 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.070 nvme0n1 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.070 08:01:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.070 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.071 08:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.638 nvme0n1 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.638 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.207 nvme0n1 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.207 
08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.207 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.464 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
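Each pass above is one connect_authenticate iteration. The target-side helper nvmet_auth_set_key echoes the digest ('hmac(sha256)'), the DH group, and the DHHC-1 secrets; those writes appear to correspond to the kernel nvmet configfs dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes for the host entry. The initiator is then restricted to the matching algorithm before connecting. A minimal sketch of one pass as it runs in this trace, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and using the pre-registered key names rather than the literal DHHC-1 secrets:

    # host side: allow only the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # connect with bidirectional DH-HMAC-CHAP (key3 authenticates the host,
    # ckey3 authenticates the controller back to the host)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # verify the controller actually came up, then tear it down for the next pass
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines between iterations appear to be the attach RPC printing the bdev it created for the namespace once the authenticated connect succeeds.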
00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.465 08:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.031 nvme0n1 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.031 
08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.031 08:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.598 nvme0n1 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.598 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 nvme0n1 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
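By this point the outer loops have advanced: the same key set is being replayed with hmac(sha384), starting again from ffdhe2048. The @100, @101 and @102 xtrace lines give away the driver structure, and the @58 expansion just above is what lets keyid 4, whose ckey is empty, connect with host-side authentication only. A sketch of the control flow reconstructed from those markers (array contents beyond what this trace shows are assumptions):

    for digest in "${digests[@]}"; do          # sha256 above, sha384 from here on
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ... ffdhe8192 in this run
            for keyid in "${!keys[@]}"; do     # 0..4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done

Inside connect_authenticate, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expands to nothing when ckeys[keyid] is empty, which matches the [[ -z '' ]] branches visible for keyid 4.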
00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.857 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.115 nvme0n1 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.115 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.116 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 nvme0n1 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.374 08:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 nvme0n1 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.374 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.632 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.633 nvme0n1 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:17.633 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
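
The ffdhe2048 pass above and the ffdhe3072 records that follow are produced by one fixed driver loop: for every (dhgroup, key ID) pair the script programs the target with nvmet_auth_set_key and then runs connect_authenticate from the host. A minimal sketch of that loop, reconstructed from the host/auth.sh@101-@103 trace markers (the sha384 literal matches this part of the run; the array contents and any outer digest loop are assumptions):

    # Driver loop as reconstructed from the trace markers; not the verbatim script.
    for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144
        for keyid in "${!keys[@]}"; do           # key IDs 0-4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host side
        done
    done
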
00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.891 nvme0n1 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.891 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
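
Every connect_authenticate expansion in this section (host/auth.sh@55-@65) performs the same host-side sequence: restrict the bdev_nvme module to one digest/dhgroup pair, attach with the DH-HMAC-CHAP key under test, verify the controller came up as nvme0, and detach. A sketch reconstructed from those markers (argument handling and return-value plumbing are assumptions; the RPC invocations appear verbatim in the trace):

    connect_authenticate() {   # reconstruction, not the verbatim helper
        local digest dhgroup keyid ckey
        digest=$1 dhgroup=$2 keyid=$3
        # --dhchap-ctrlr-key is added only when a controller key exists for this ID
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
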
00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.150 nvme0n1 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.150 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.408 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.409 08:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.409 nvme0n1 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.409 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.666 nvme0n1 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.666 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.924 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.925 nvme0n1 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.925 08:02:03 
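
The nvmet_auth_set_key expansions (host/auth.sh@42-@51) are the target-side half of each cycle. Since set -x does not print redirection targets, the destinations of the traced echo commands are not visible here; the configfs paths in the sketch below are assumptions based on the kernel nvmet host attributes, not confirmed by this log. Note that key ID 4 has an empty controller key (ckey=''), so the @51 guard skips bidirectional authentication for it, which is why the matching attach calls carry no --dhchap-ctrlr-key:

    nvmet_auth_set_key() {   # reconstruction; configfs paths are assumed
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}

        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
        echo "hmac(${digest})" > "$hostdir/dhchap_hash"
        echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"
        echo "$key"            > "$hostdir/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # skipped for key ID 4
    }
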
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.925 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.183 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.184 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 nvme0n1 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 08:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.442 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.443 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.701 nvme0n1 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.701 08:02:04 
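
get_main_ns_ip, expanded before every attach, maps the transport under test to the environment variable holding the initiator-side IP and resolves it indirectly; in this tcp run that is NVMF_INITIATOR_IP -> 10.0.0.1. A sketch from the nvmf/common.sh@741-@755 markers (variable names match the trace; the early-return guards are assumptions):

    get_main_ns_ip() {   # reconstruction of nvmf/common.sh@741-@755
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # name of the variable to read
        [[ -z ${!ip} ]] && return 1                           # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }
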
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.701 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.702 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 nvme0n1 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:19.986 08:02:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.986 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.244 nvme0n1 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.244 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.245 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.245 08:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.245 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.245 08:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:20.503 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 nvme0n1 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:20.762 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.763 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.021 nvme0n1 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.021 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.281 08:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.541 nvme0n1 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.541 08:02:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.541 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 nvme0n1 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 08:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.367 nvme0n1 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.367 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
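For readability, here is a condensed bash sketch of the loop this trace keeps replaying, pieced together from the xtrace source references (host/auth.sh@100-104 for the loops, @42-51 for the target-side helper, @55-65 for connect_authenticate). It is a paraphrase of what the trace shows, not the verbatim test script; in particular the body of nvmet_auth_set_key (the kernel-target side) is only partially visible in this log.

    # Sketch reconstructed from the xtrace above (paraphrase, not the verbatim script).
    for digest in "${digests[@]}"; do          # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do    # host/auth.sh@101
            for keyid in "${!keys[@]}"; do     # host/auth.sh@102
                # Target side: install the DHHC-1 key (plus controller key, if any)
                # for this digest/dhgroup on the kernel nvmet host entry.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # host/auth.sh@103

                # Host side: pin SPDK to one digest/dhgroup, connect with the
                # matching key pair, verify the controller came up, tear down.
                rpc_cmd bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty for keyid 4
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a "$(get_main_ns_ip)" -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key$keyid" "${ckey[@]}"
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done

The recurring [[ 0 == 0 ]] lines from autotest_common.sh@587 appear to compare each RPC's exit status against zero, so a single failed DH-HMAC-CHAP negotiation would break the run at the exact iteration where it happened rather than at the end.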
00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.626 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.885 nvme0n1 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
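The get_main_ns_ip expansion that precedes every attach is almost fully visible in the trace (nvmf/common.sh@741-755). The reconstruction below is inferred from those xtrace lines and may differ cosmetically from the real nvmf/common.sh (the early-return fallbacks in particular are assumptions, since the trace only shows the passing branches), but the mechanism is clear: map the transport to the name of the environment variable that holds the address, then dereference that name.

    # Inferred from the nvmf/common.sh@741-755 xtrace (assumed paraphrase).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # trace: echo 10.0.0.1
    }

On this tcp run the indirection lands on NVMF_INITIATOR_IP=10.0.0.1, which is why every bdev_nvme_attach_controller in this section targets -a 10.0.0.1 -s 4420.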
00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.885 08:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 nvme0n1 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.821 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.822 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.388 nvme0n1 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.388 08:02:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 nvme0n1 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.957 08:02:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 nvme0n1 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.524 08:02:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.524 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 nvme0n1 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 nvme0n1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.459 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 nvme0n1 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.718 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 nvme0n1 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.976 08:02:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.976 08:02:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.976 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.235 nvme0n1 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.235 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.494 nvme0n1 00:26:27.494 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.494 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.494 08:02:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.494 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.494 08:02:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.494 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.495 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 nvme0n1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.754 
08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.754 08:02:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 nvme0n1 00:26:27.754 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.013 nvme0n1 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.013 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.014 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.014 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.272 08:02:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.272 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.273 08:02:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.273 nvme0n1 00:26:28.273 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.273 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.273 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.273 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.273 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.531 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.532 
08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.532 nvme0n1 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.532 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.789 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.790 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 nvme0n1 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.048 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.049 08:02:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.049 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.307 nvme0n1 00:26:29.307 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.308 08:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.308 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.566 nvme0n1 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.566 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.826 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.085 nvme0n1 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.085 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.344 nvme0n1 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.344 08:02:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
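The nvmet_auth_set_key trace at host/auth.sh@42-51 above programs the target side before each connect: @48 echoes 'hmac(sha512)', @49 the DH group, @50 the key, and @51 the controller key only when one is defined. bash xtrace does not print redirection targets, so the destinations in this minimal sketch are an assumption based on the standard Linux nvmet configfs layout, not something shown in the trace:

# Sketch of nvmet_auth_set_key as reconstructed from the xtrace above.
# Configfs paths are assumed; only the echoed values appear in the log.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
	echo "hmac(${digest})" > "${host}/dhchap_hash"       # host/auth.sh@48
	echo "${dhgroup}"      > "${host}/dhchap_dhgroup"    # host/auth.sh@49
	echo "${key}"          > "${host}/dhchap_key"        # host/auth.sh@50
	[[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # host/auth.sh@51
}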
00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.344 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 nvme0n1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
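Each connect_authenticate pass traced above drives the SPDK initiator over JSON-RPC: restrict the allowed digest and DH group, attach a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists, for bidirectional auth), verify the controller came up, then detach. A condensed sketch of one pass, using only rpc_cmd invocations and flags that appear verbatim in the trace (keyid 1 shown as an example):

# One connect_authenticate iteration, condensed from host/auth.sh@55-65.
digest=sha512 dhgroup=ffdhe6144 keyid=1
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # host/auth.sh@64
rpc_cmd bdev_nvme_detach_controller nvme0                               # host/auth.sh@65

The later NOT rpc_cmd cases invert this: attaching without the expected key (or with a mismatched one) must fail, which is why those requests end in the JSON-RPC error response code -5 (Input/output error) seen further down.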
00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.912 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.172 nvme0n1 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.172 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.431 08:02:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.690 nvme0n1 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.690 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.691 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.258 nvme0n1 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.258 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.259 08:02:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.518 nvme0n1 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.518 08:02:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ1ODM1OTIwMDUwNmQxOTllZTZlMDFiNTAzODVmMTASzC+M: 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: ]] 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTU1ODYzMTVkMDFmZjVlMzEyOGZjZjhjMzljNTgyNzcxOTAxMDgwNjY4NDhmNmY1MDAyMDA4MmY1MzExY2ZmMo1tvnA=: 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.518 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.776 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.343 nvme0n1 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.343 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.344 08:02:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.948 nvme0n1 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.948 08:02:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY3NWM1ZDNjZWE5ODkxNjhjYmRmN2I3MmI2NmMzYjKDfg+N: 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: ]] 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmM0Y2U2OTI4YTVhMGNlYjNiM2NiZDgyMjJhYmNmN2OlklCq: 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.948 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.949 08:02:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.514 nvme0n1 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ2NzIwMzllNDI4ZTZlMGQ1NzMwNzJhZTZhOWVhMjJjODVjN2FlNzVlM2ZiZTRmLaA3Ww==: 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODkyNDgxYjA1NDcyNzM5MjM5YTA4OWJlMWE0MjVlMjIqmiw+: 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:34.514 08:02:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.514 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.079 nvme0n1 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.079 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTdlZWViMTc4YjlmMDFlNmE4NTQxOTkyMzkwYzJkZWM3YThmNzE5NmIzNDMzYjMyYWJjZjNmZTIyMzViZjE1YSsKQJY=: 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.337 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:35.338 08:02:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.904 nvme0n1 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTBhYTQ3MjE2ZDQ1YWIwYzJkNDMyNDA0MTc4MzUzMDY1YWE5ODRiYWU3ODc5ZjEwunCA+A==: 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhmNThhNWVhMjk3NTFjZDJkM2Y1NGVjNjk3NThkNTdkNjI5ZjZmNzBhZWEyOTJjs59erw==: 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.904 
08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.904 request: 00:26:35.904 { 00:26:35.904 "name": "nvme0", 00:26:35.904 "trtype": "tcp", 00:26:35.904 "traddr": "10.0.0.1", 00:26:35.904 "adrfam": "ipv4", 00:26:35.904 "trsvcid": "4420", 00:26:35.904 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.904 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.904 "prchk_reftag": false, 00:26:35.904 "prchk_guard": false, 00:26:35.904 "hdgst": false, 00:26:35.904 "ddgst": false, 00:26:35.904 "method": "bdev_nvme_attach_controller", 00:26:35.904 "req_id": 1 00:26:35.904 } 00:26:35.904 Got JSON-RPC error response 00:26:35.904 response: 00:26:35.904 { 00:26:35.904 "code": -5, 00:26:35.904 "message": "Input/output error" 00:26:35.904 } 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.904 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.905 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.164 request: 00:26:36.164 { 00:26:36.164 "name": "nvme0", 00:26:36.164 "trtype": "tcp", 00:26:36.164 "traddr": "10.0.0.1", 00:26:36.164 "adrfam": "ipv4", 00:26:36.164 "trsvcid": "4420", 00:26:36.164 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:36.164 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:36.164 "prchk_reftag": false, 00:26:36.164 "prchk_guard": false, 00:26:36.164 "hdgst": false, 00:26:36.164 "ddgst": false, 00:26:36.164 "dhchap_key": "key2", 00:26:36.164 "method": "bdev_nvme_attach_controller", 00:26:36.164 "req_id": 1 00:26:36.164 } 00:26:36.164 Got JSON-RPC error response 00:26:36.164 response: 00:26:36.164 { 00:26:36.164 "code": -5, 00:26:36.164 "message": "Input/output error" 00:26:36.164 } 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:36.164 08:02:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.164 request: 00:26:36.164 { 00:26:36.164 "name": "nvme0", 00:26:36.164 "trtype": "tcp", 00:26:36.164 "traddr": "10.0.0.1", 00:26:36.164 "adrfam": "ipv4", 
00:26:36.164 "trsvcid": "4420", 00:26:36.164 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:36.164 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:36.164 "prchk_reftag": false, 00:26:36.164 "prchk_guard": false, 00:26:36.164 "hdgst": false, 00:26:36.164 "ddgst": false, 00:26:36.164 "dhchap_key": "key1", 00:26:36.164 "dhchap_ctrlr_key": "ckey2", 00:26:36.164 "method": "bdev_nvme_attach_controller", 00:26:36.164 "req_id": 1 00:26:36.164 } 00:26:36.164 Got JSON-RPC error response 00:26:36.164 response: 00:26:36.164 { 00:26:36.164 "code": -5, 00:26:36.164 "message": "Input/output error" 00:26:36.164 } 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.164 rmmod nvme_tcp 00:26:36.164 rmmod nvme_fabrics 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3384803 ']' 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3384803 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3384803 ']' 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3384803 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3384803 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3384803' 00:26:36.164 killing process with pid 3384803 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3384803 00:26:36.164 08:02:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3384803 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.422 08:02:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:38.950 08:02:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.482 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:41.482 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:42.420 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:42.420 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0PZ /tmp/spdk.key-null.Evw /tmp/spdk.key-sha256.I8H /tmp/spdk.key-sha384.A9s /tmp/spdk.key-sha512.cZi 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:42.420 08:02:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:44.955 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:44.955 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:44.955 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:45.214 00:26:45.214 real 0m50.415s 00:26:45.214 user 0m45.157s 00:26:45.214 sys 0m12.277s 00:26:45.214 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.214 08:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.214 ************************************ 00:26:45.214 END TEST nvmf_auth_host 00:26:45.214 ************************************ 00:26:45.214 08:02:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:45.214 08:02:29 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:45.214 08:02:29 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:45.214 08:02:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:45.214 08:02:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.214 08:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:45.214 ************************************ 00:26:45.214 START TEST nvmf_digest 00:26:45.214 ************************************ 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:45.214 * Looking for test storage... 
00:26:45.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.214 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.215 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.215 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.215 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.474 08:02:29 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.474 08:02:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.745 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:50.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:50.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:50.746 Found net devices under 0000:86:00.0: cvl_0_0 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:50.746 Found net devices under 0000:86:00.1: cvl_0_1 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.746 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:26:51.005 00:26:51.005 --- 10.0.0.2 ping statistics --- 00:26:51.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.005 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:26:51.005 00:26:51.005 --- 10.0.0.1 ping statistics --- 00:26:51.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.005 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.005 08:02:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.264 ************************************ 00:26:51.264 START TEST nvmf_digest_clean 00:26:51.264 ************************************ 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3398049 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3398049 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3398049 ']' 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.264 
08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.264 08:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.264 [2024-07-15 08:02:35.817660] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:26:51.264 [2024-07-15 08:02:35.817700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.264 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.264 [2024-07-15 08:02:35.886915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.264 [2024-07-15 08:02:35.964556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.264 [2024-07-15 08:02:35.964590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.264 [2024-07-15 08:02:35.964598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.264 [2024-07-15 08:02:35.964604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.264 [2024-07-15 08:02:35.964609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
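
The app_setup_trace notices above mean this nvmf_tgt was started with all tracepoint groups enabled (-e 0xFFFF) and trace shm id 0. A sketch of how that trace data could be pulled from such a run, assuming the stock spdk_trace tool built in this tree (the commands come from the notices themselves):

# live snapshot of the target's tracepoints (app name nvmf, shm id 0)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# or keep the shared-memory trace file for offline analysis after the run
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0
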
00:26:51.264 [2024-07-15 08:02:35.964625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.200 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.200 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:52.200 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.200 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:52.201 null0 00:26:52.201 [2024-07-15 08:02:36.751792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.201 [2024-07-15 08:02:36.775978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3398211 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3398211 /var/tmp/bperf.sock 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3398211 ']' 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.201 08:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:52.201 [2024-07-15 08:02:36.827751] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:26:52.201 [2024-07-15 08:02:36.827791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398211 ] 00:26:52.201 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.201 [2024-07-15 08:02:36.895476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.460 [2024-07-15 08:02:36.968390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.027 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.027 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:53.027 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:53.027 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:53.027 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:53.285 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.285 08:02:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.543 nvme0n1 00:26:53.543 08:02:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.543 08:02:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.543 Running I/O for 2 seconds... 
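
Between these timestamps the first digest pass is running. The sequence the harness used to get here, reconstructed from the rpc.py calls logged above into one standalone sketch (same paths and arguments as this workspace):

# start bdevperf paused (-z --wait-for-rpc) so the transport can be configured first
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# finish subsystem init, then attach the TCP listener with data digest enabled
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# drive the timed workload against the resulting nvme0n1 bdev
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
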
00:26:55.481 00:26:55.481 Latency(us) 00:26:55.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.481 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:55.481 nvme0n1 : 2.00 24891.55 97.23 0.00 0.00 5137.98 2350.75 11511.54 00:26:55.482 =================================================================================================================== 00:26:55.482 Total : 24891.55 97.23 0.00 0.00 5137.98 2350.75 11511.54 00:26:55.740 0 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.740 | select(.opcode=="crc32c") 00:26:55.740 | "\(.module_name) \(.executed)"' 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3398211 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3398211 ']' 00:26:55.740 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3398211 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398211 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398211' 00:26:55.741 killing process with pid 3398211 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3398211 00:26:55.741 Received shutdown signal, test time was about 2.000000 seconds 00:26:55.741 00:26:55.741 Latency(us) 00:26:55.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.741 =================================================================================================================== 00:26:55.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.741 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3398211 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:56.000 08:02:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3398902 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3398902 /var/tmp/bperf.sock 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3398902 ']' 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.000 08:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.000 [2024-07-15 08:02:40.706781] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:26:56.000 [2024-07-15 08:02:40.706832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398902 ] 00:26:56.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.000 Zero copy mechanism will not be used. 
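
The "zero copy threshold" notices are expected for the 131072-byte passes and are informational, not failures: bdevperf declines zero-copy for I/O larger than 65536 bytes and falls back to buffered I/O, so the large-block runs exercise the copy path. The throughput columns in the Latency tables are plain IOPS times block size; a quick check against the randread 4096 pass above:

# MiB/s = IOPS * block_size / 2^20
python3 -c 'print(round(24891.55 * 4096 / 2**20, 2))'   # 97.23, matching the table
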
00:26:56.000 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.259 [2024-07-15 08:02:40.774609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.259 [2024-07-15 08:02:40.846930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.826 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.826 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:56.826 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:56.826 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:56.826 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:57.085 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.085 08:02:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.651 nvme0n1 00:26:57.651 08:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.651 08:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.651 Zero copy mechanism will not be used. 00:26:57.651 Running I/O for 2 seconds... 
00:26:59.554 00:26:59.554 Latency(us) 00:26:59.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.554 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:59.554 nvme0n1 : 2.00 5329.75 666.22 0.00 0.00 2999.29 755.09 8092.27 00:26:59.554 =================================================================================================================== 00:26:59.554 Total : 5329.75 666.22 0.00 0.00 2999.29 755.09 8092.27 00:26:59.554 0 00:26:59.554 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:59.554 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:59.554 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:59.554 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:59.554 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:59.554 | select(.opcode=="crc32c") 00:26:59.554 | "\(.module_name) \(.executed)"' 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3398902 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3398902 ']' 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3398902 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398902 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398902' 00:26:59.812 killing process with pid 3398902 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3398902 00:26:59.812 Received shutdown signal, test time was about 2.000000 seconds 00:26:59.812 00:26:59.812 Latency(us) 00:26:59.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.812 =================================================================================================================== 00:26:59.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.812 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3398902 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:00.071 08:02:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3399598 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3399598 /var/tmp/bperf.sock 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3399598 ']' 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:00.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.071 08:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.071 [2024-07-15 08:02:44.667910] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:27:00.071 [2024-07-15 08:02:44.667957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399598 ] 00:27:00.071 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.071 [2024-07-15 08:02:44.736536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.071 [2024-07-15 08:02:44.815647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.007 08:02:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.266 nvme0n1 00:27:01.266 08:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:01.266 08:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:01.523 Running I/O for 2 seconds... 
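
Each pass ends with the same validation, visible in the accel_get_stats/jq lines above: the crc32c digest work must actually have executed, and in the expected accel module -- software here, since DSA scanning is disabled (scan_dsa=false) in every run_bperf call. A sketch of that check on its own:

# ask bdevperf's accel layer where the crc32c operations ran
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# digest.sh then requires: executed > 0 and module_name == software
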
00:27:03.424 00:27:03.424 Latency(us) 00:27:03.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.424 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:03.424 nvme0n1 : 2.00 27183.16 106.18 0.00 0.00 4700.22 4388.06 9801.91 00:27:03.424 =================================================================================================================== 00:27:03.424 Total : 27183.16 106.18 0.00 0.00 4700.22 4388.06 9801.91 00:27:03.424 0 00:27:03.424 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:03.424 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:03.424 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:03.424 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:03.424 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:03.424 | select(.opcode=="crc32c") 00:27:03.424 | "\(.module_name) \(.executed)"' 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3399598 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3399598 ']' 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3399598 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3399598 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3399598' 00:27:03.683 killing process with pid 3399598 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3399598 00:27:03.683 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.683 00:27:03.683 Latency(us) 00:27:03.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.683 =================================================================================================================== 00:27:03.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.683 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3399598 00:27:03.942 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:03.943 08:02:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3400117 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3400117 /var/tmp/bperf.sock 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3400117 ']' 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.943 08:02:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.943 [2024-07-15 08:02:48.578860] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:03.943 [2024-07-15 08:02:48.578909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400117 ] 00:27:03.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.943 Zero copy mechanism will not be used. 
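As in the first run, the pass/fail decision after the I/O completes is whether the expected accel module actually executed the crc32c operations. Reconstructed from the host/digest.sh@93-96 xtrace above, using the same jq filter the log shows (exp_module is "software" here because scan_dsa=false):

  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest check passed"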
00:27:03.943 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.943 [2024-07-15 08:02:48.648733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.202 [2024-07-15 08:02:48.719863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.769 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.769 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:04.769 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:04.769 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:04.769 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:05.028 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.028 08:02:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.287 nvme0n1 00:27:05.546 08:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:05.546 08:02:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.546 Zero copy mechanism will not be used. 00:27:05.546 Running I/O for 2 seconds... 
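The MiB/s column in bdevperf's result tables is just IOPS scaled by the I/O size (IOPS * io_size / 2^20); the 4 KiB table above and the 128 KiB table that follows both check out:

  awk 'BEGIN { printf "%.2f\n", 27183.16 * 4096   / 1048576 }'   # 106.18  (4 KiB, qd=128 run)
  awk 'BEGIN { printf "%.2f\n",  6058.10 * 131072 / 1048576 }'   # 757.26  (128 KiB, qd=16 run)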
00:27:07.448 00:27:07.448 Latency(us) 00:27:07.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.448 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:07.448 nvme0n1 : 2.00 6058.10 757.26 0.00 0.00 2636.92 2094.30 11340.58 00:27:07.448 =================================================================================================================== 00:27:07.448 Total : 6058.10 757.26 0.00 0.00 2636.92 2094.30 11340.58 00:27:07.448 0 00:27:07.448 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:07.448 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:07.448 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:07.448 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:07.448 | select(.opcode=="crc32c") 00:27:07.448 | "\(.module_name) \(.executed)"' 00:27:07.448 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3400117 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3400117 ']' 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3400117 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400117 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400117' 00:27:07.706 killing process with pid 3400117 00:27:07.706 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3400117 00:27:07.706 Received shutdown signal, test time was about 2.000000 seconds 00:27:07.706 00:27:07.706 Latency(us) 00:27:07.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.706 =================================================================================================================== 00:27:07.707 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.707 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3400117 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3398049 00:27:07.965 08:02:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3398049 ']' 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3398049 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398049 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398049' 00:27:07.965 killing process with pid 3398049 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3398049 00:27:07.965 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3398049 00:27:08.224 00:27:08.224 real 0m17.034s 00:27:08.224 user 0m32.673s 00:27:08.224 sys 0m4.523s 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:08.224 ************************************ 00:27:08.224 END TEST nvmf_digest_clean 00:27:08.224 ************************************ 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:08.224 ************************************ 00:27:08.224 START TEST nvmf_digest_error 00:27:08.224 ************************************ 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3400850 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3400850 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3400850 ']' 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.224 08:02:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.224 [2024-07-15 08:02:52.920418] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:08.224 [2024-07-15 08:02:52.920460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.224 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.482 [2024-07-15 08:02:52.988365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.482 [2024-07-15 08:02:53.067113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.482 [2024-07-15 08:02:53.067150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.482 [2024-07-15 08:02:53.067157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.482 [2024-07-15 08:02:53.067163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.482 [2024-07-15 08:02:53.067168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
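This nvmf target is likewise started with --wait-for-rpc so that crc32c can be rerouted to the error-injection accel module before initialization completes, which is what the accel_assign_opc notice below records. A sketch of that target-side setup: only the accel_assign_opc line appears verbatim in the log; the null-bdev and subsystem plumbing is a hedged reconstruction of common_target_config (whose resulting null0/transport/listener notices follow), and the bdev size and block size are placeholders:

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error    # crc32c -> module "error"
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py bdev_null_create null0 100 4096        # size/block size assumed
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420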
00:27:08.482 [2024-07-15 08:02:53.067185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.048 [2024-07-15 08:02:53.765221] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.048 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.306 null0 00:27:09.306 [2024-07-15 08:02:53.858600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.306 [2024-07-15 08:02:53.882763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.306 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.306 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:09.306 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3401051 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3401051 /var/tmp/bperf.sock 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3401051 ']' 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.307 08:02:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.307 [2024-07-15 08:02:53.930642] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:09.307 [2024-07-15 08:02:53.930682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401051 ] 00:27:09.307 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.307 [2024-07-15 08:02:53.997739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.564 [2024-07-15 08:02:54.070349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.130 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.130 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:10.130 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:10.130 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.389 08:02:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.648 nvme0n1 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:10.648 08:02:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.648 Running I/O for 2 seconds... 00:27:10.916 [2024-07-15 08:02:55.409802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.916 [2024-07-15 08:02:55.409835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.916 [2024-07-15 08:02:55.409846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.916 [2024-07-15 08:02:55.418419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.916 [2024-07-15 08:02:55.418444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.916 [2024-07-15 08:02:55.418453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.916 [2024-07-15 08:02:55.431003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.916 [2024-07-15 08:02:55.431026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.431034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.439618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.439647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.451779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.451800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.451809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.462404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.462425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.462433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.474608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.474627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12944 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.474635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.484751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.484773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.484781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.493730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.493750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.493759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.505306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.505329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.505338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.513955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.513975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.513984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.525706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.525726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.525734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.537244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.537265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.537272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.546057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.546076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:8832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.546084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.557863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.557884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.557892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.571000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.571018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.571025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.579501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.579520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.579529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.589840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.589861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.598255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.598276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.609918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.609937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.609945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.622100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.622121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.622129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.633991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.634011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.634019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.642884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.642906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.642914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.917 [2024-07-15 08:02:55.653746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:10.917 [2024-07-15 08:02:55.653768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.917 [2024-07-15 08:02:55.653776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.242 [2024-07-15 08:02:55.664548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.242 [2024-07-15 08:02:55.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.242 [2024-07-15 08:02:55.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.242 [2024-07-15 08:02:55.672794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.242 [2024-07-15 08:02:55.672816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.672825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.685164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.685185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.685194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.696245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.696266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.696275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.705004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.705023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.705031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.716716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.716737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.716745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.728330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.728351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.728359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.737341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.737360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.737368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.749948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.749968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.749976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.763014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.763034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.763042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.775317] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.775337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.775346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.783513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.783533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.783541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.794383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.794404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.794412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.803948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.803976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.812530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.812550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.812558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.821670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.821690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.821699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.832480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.832500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.832508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.841178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.841199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.841210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.853590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.853610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.853618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.866192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.866214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.866221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.875818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.875837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.875845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.885407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.885427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.885435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.897099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.897119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.897127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.908344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.908364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.908372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.916827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.916848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.916856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.928470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.928490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.928498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.937793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.937820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.937827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.948501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.948522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.948530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.960394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.960414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.960422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.969611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.243 [2024-07-15 08:02:55.969631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.243 [2024-07-15 08:02:55.969639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.243 [2024-07-15 08:02:55.980043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.244 [2024-07-15 08:02:55.980064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.244 [2024-07-15 08:02:55.980072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.244 [2024-07-15 08:02:55.990365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.244 [2024-07-15 08:02:55.990385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.244 [2024-07-15 08:02:55.990393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:55.999213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:55.999239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:55.999248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.008867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.008887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.008896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.019646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.019666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.019675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.030451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.030471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.030479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.038948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.038969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.038977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.050408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.050429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
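Each injected corruption surfaces as the pair repeated throughout this flood: nvme_tcp.c:1459 flags the mismatched data digest on receive, and because the controller was attached with --bdev-retry-count -1 the READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried rather than failed up the stack. A quick way to size the flood from a saved copy of this console output (the filename here is an assumption):

  grep -c 'data digest error on tqpair' console.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log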
00:27:11.502 [2024-07-15 08:02:56.050437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.059104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.059123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.059131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.069823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.502 [2024-07-15 08:02:56.069842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.502 [2024-07-15 08:02:56.069850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.502 [2024-07-15 08:02:56.078045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.078064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.078071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.090489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.090508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.090516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.098641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.098660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.098667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.110553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.110572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.110584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.118861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.118882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:19631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.118890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.129928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.129949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.129957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.140307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.140327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.140336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.150094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.150112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.150120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.158630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.158649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.158657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.170309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.170329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.170337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.178360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.178379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.178387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.188192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.188211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.188218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.198451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.198470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.198477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.206532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.206552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.206560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.217216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.217243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.217253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.227265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.227284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.227292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.236301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.236320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.236327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.503 [2024-07-15 08:02:56.248272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.503 [2024-07-15 08:02:56.248292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.503 [2024-07-15 08:02:56.248301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.763 [2024-07-15 08:02:56.260786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 
00:27:11.763 [2024-07-15 08:02:56.260807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.763 [2024-07-15 08:02:56.260815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.763 [2024-07-15 08:02:56.272456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.272475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.272484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.280875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.280895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.280906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.293295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.293315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.293323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.301921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.301940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.301948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.313966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.313985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.313993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.322943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.322962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.322970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.332946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.332965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.332973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.341460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.341479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.341487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.350886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.350905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.350912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.360137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.360156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.360163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.370459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.370481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.370489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.378152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.378171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.378178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.390038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.390058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.402362] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.402382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.402390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.415158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.415178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.415186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.423867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.423887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.423895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.435592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.435612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.435620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.448034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.448054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.448062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.460645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.460665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.460673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.468878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.468897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.468905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:11.764 [2024-07-15 08:02:56.481328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.481348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.481356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.492853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.492872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.492880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.501768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.764 [2024-07-15 08:02:56.501786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.764 [2024-07-15 08:02:56.501794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.764 [2024-07-15 08:02:56.513543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:11.765 [2024-07-15 08:02:56.513563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.765 [2024-07-15 08:02:56.513571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.525888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.525908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.525916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.539529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.539547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.539556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.547564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.547583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.547591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.559984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.560003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.560014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.572088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.572107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.572115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.583149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.583168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.583177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.592322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.592341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.592349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.601775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.601796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.601804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.610118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.610137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.610145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.620802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.620823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.620831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.630590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.630610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.630618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.639950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.639970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.639978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.648952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.648971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.648979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.659900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.659920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.659927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.668425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.668444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.668452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.679622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.679642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.679650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.689789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.689809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.025 [2024-07-15 08:02:56.689816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.698438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.698457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.698465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.710596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.710616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.710624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.719977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.719996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.720003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.728833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.728853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.728864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.738508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.738528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.738536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.747463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.747482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.747490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.757880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.757899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:16052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.757906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.025 [2024-07-15 08:02:56.766401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.025 [2024-07-15 08:02:56.766432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.025 [2024-07-15 08:02:56.766440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.779773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.779794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.779802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.792124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.792145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.792153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.800246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.800266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.811133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.811157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.811166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.822571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.822594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.822602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.830501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.830520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.840107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.840127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.840134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.849669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.849688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.849697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.858624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.858644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.858651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.868988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.869008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.869015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.878118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.878137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.878145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.887694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.887714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.887721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.896007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 
[2024-07-15 08:02:56.896027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.896035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.905923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.905942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.905950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.916488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.916508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.916516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.924973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.924992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.925000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.933957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.933976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.933984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.944721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.944741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.944749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.956916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.956936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.956944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.965496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.965516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.965523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.977548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.977567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.977575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.986125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.986145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.986156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:56.997727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:56.997746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:56.997753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:57.010141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:57.010160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:57.010168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:57.021514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:57.021534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:57.021542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.284 [2024-07-15 08:02:57.030279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.284 [2024-07-15 08:02:57.030298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.284 [2024-07-15 08:02:57.030306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.042872] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.042892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.042900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.055730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.055752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.055760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.064373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.064392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.064401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.074759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.074780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.074788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.084946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.084971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.084979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.093885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.093907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.093915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.105749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.105770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.105778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.117898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.117919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.117927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.127179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.127200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.127208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.138422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.138444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.138452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.146732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.146752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.146760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.158247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.158266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.543 [2024-07-15 08:02:57.158275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.543 [2024-07-15 08:02:57.167835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.543 [2024-07-15 08:02:57.167855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.544 [2024-07-15 08:02:57.167866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.544 [2024-07-15 08:02:57.176343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20) 00:27:12.544 [2024-07-15 08:02:57.176363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.544 [2024-07-15 08:02:57.176371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.544 [2024-07-15 08:02:57.187464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20)
00:27:12.544 [2024-07-15 08:02:57.187484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.544 [2024-07-15 08:02:57.187491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data-digest-error / READ / COMMAND TRANSIENT TRANSPORT ERROR triple repeats for about twenty more single-block reads between 08:02:57.199 and 08:02:57.382 on tqpair 0x1595f20 (cid and lba vary; every completion carries status 00/22) ...]
00:27:12.804 [2024-07-15 08:02:57.394845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1595f20)
00:27:12.804 [2024-07-15 08:02:57.394866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.804 [2024-07-15 08:02:57.394873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.804
00:27:12.804 Latency(us)
00:27:12.804 Device Information                                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:12.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:12.804 	 nvme0n1                                                           :       2.05   24255.57      94.75      0.00     0.00    5169.02    2592.95   52656.75
00:27:12.804 ===================================================================================================================
00:27:12.804 Total                                                                :               24255.57      94.75      0.00     0.00    5169.02    2592.95   52656.75
00:27:12.804 0
00:27:12.804 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:12.804 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:12.804 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:12.804 | .driver_specific
00:27:12.804 | .nvme_error
00:27:12.804 | .status_code
00:27:12.804 | .command_transient_transport_error'
00:27:12.804 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3401051
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3401051 ']'
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3401051
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3401051
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3401051'
00:27:13.062 killing process with pid 3401051
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3401051
00:27:13.062 Received shutdown signal, test time was about 2.000000 seconds
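Annotator's note: the jq filter traced above pulls the command_transient_transport_error counter out of bdev_get_iostat's driver_specific nvme_error block; it read 194 here, so (( 194 > 0 )) passes and the first bperf instance is torn down. The summary table is also self-consistent: 24255.57 IOPS x 4096 B per I/O comes to 94.75 MiB/s. A minimal sketch of that check, assuming the rpc.py path and /var/tmp/bperf.sock socket from this run, and that bdevperf was started with bdev_nvme_set_options --nvme-error-stat so the nvme_error counters are populated:

    # Sketch of get_transient_errcount as reconstructed from the trace above.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # The test passes only if at least one injected digest error surfaced as
    # a transient transport completion (the counter read 194 in this run).
    (( $(get_transient_errcount nvme0n1) > 0 ))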
00:27:13.062
00:27:13.062 Latency(us)
00:27:13.062 Device Information                                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:27:13.062 ===================================================================================================================
00:27:13.062 Total                                                                :                   0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:27:13.062 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3401051
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3401751
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3401751 /var/tmp/bperf.sock
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3401751 ']'
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:13.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:13.321 08:02:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.321 [2024-07-15 08:02:57.904974] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:27:13.321 [2024-07-15 08:02:57.905022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401751 ]
00:27:13.321 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:13.321 Zero copy mechanism will not be used.
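Annotator's note: the second run (randread, 128 KiB, queue depth 16) restarts bdevperf in RPC-wait mode on a private socket and only then drives it over RPC. A minimal sketch of that launch-and-wait pattern, assuming the SPDK checkout path from this run and approximating waitforlisten by polling rpc_get_methods (which is what the common helper does):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 2: core mask 0x2 (core 1); -r: private RPC socket, separate from the
    # nvmf target's; -w/-o/-q/-t: randread, 128 KiB I/O, queue depth 16, 2 s;
    # -z: stay idle until the perform_tests RPC arrives.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Block until the socket accepts RPCs before configuring the bdev.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done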
00:27:13.321 EAL: No free 2048 kB hugepages reported on node 1
00:27:13.321 [2024-07-15 08:02:57.973238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.321 [2024-07-15 08:02:58.044882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.257 08:02:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.515 nvme0n1
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:14.515 08:02:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:14.773 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.773 Zero copy mechanism will not be used.
00:27:14.773 Running I/O for 2 seconds...
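Annotator's note: the sequence traced above is the whole setup for the digest-error run: enable per-status NVMe error counting with unlimited bdev retries on the initiator, clear any stale crc32c injection, attach the controller with data digest enabled (--ddgst), arm crc32c corruption, and fire perform_tests. A condensed sketch follows; it assumes the harness's rpc_cmd resolves to the nvmf target app's default RPC socket while bperf_rpc targets bdevperf at /var/tmp/bperf.sock, with paths as in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Count completions per NVMe status code and retry I/O indefinitely, so
    # injected digest errors accumulate in iostat instead of failing the job.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c injection left over from the previous test.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled: each TCP data PDU now carries a CRC32C.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm recurring crc32c corruption (-i 32 as traced above), so digest
    # verification keeps failing on the initiator throughout the run below.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Tell the idle (-z) bdevperf to start the timed workload.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests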
00:27:14.773 [2024-07-15 08:02:59.341490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:14.773 [2024-07-15 08:02:59.341527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.773 [2024-07-15 08:02:59.341538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:14.773 [2024-07-15 08:02:59.348116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:14.773 [2024-07-15 08:02:59.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.773 [2024-07-15 08:02:59.348147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data-digest-error / READ / COMMAND TRANSIENT TRANSPORT ERROR triple repeats for roughly ninety more 32-block reads between 08:02:59.354 and 08:02:59.896 on tqpair 0x16710b0 (cid, lba, and sqhd vary; every completion carries status 00/22) ...]
00:27:15.297 [2024-07-15 08:02:59.896779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:15.297 [2024-07-15 08:02:59.896800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.297 [2024-07-15 08:02:59.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:15.297 [2024-07-15 08:02:59.902501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:15.297 [2024-07-15 08:02:59.902522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.297 [2024-07-15 08:02:59.902530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.297 [2024-07-15 08:02:59.908258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.297 [2024-07-15 08:02:59.908279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.297 [2024-07-15 08:02:59.908287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.297 [2024-07-15 08:02:59.913939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.297 [2024-07-15 08:02:59.913968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.297 [2024-07-15 08:02:59.913976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.297 [2024-07-15 08:02:59.919763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.297 [2024-07-15 08:02:59.919785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.297 [2024-07-15 08:02:59.919793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.297 [2024-07-15 08:02:59.925446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.925468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.925476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.931147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.931169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.931177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.937154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.937177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.937185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.942888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.942909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.942917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.948698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.948720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.948727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.954544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.954565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.954573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.960443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.960466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.960475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.966214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.966243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.966252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.971909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.971931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.971939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.977620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.977642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.977650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.983263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.983285] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.983293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.988953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.988974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.988982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:02:59.994874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:02:59.994895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:02:59.994904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.000874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.000897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.000905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.006852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.006875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.006885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.012431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.012453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.012465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.018155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.018179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.018187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.025064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.025090] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.025099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.031186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.031210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.031219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.037745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.037768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.037778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.298 [2024-07-15 08:03:00.045778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.298 [2024-07-15 08:03:00.045802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.298 [2024-07-15 08:03:00.045811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.053474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.053497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.053507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.060359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.060382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.060390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.067816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.067839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.067850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.075484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 
00:27:15.559 [2024-07-15 08:03:00.075508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.075517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.083026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.083049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.083058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.089990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.090012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.090020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.096345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.096377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.096387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.102579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.102602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.102610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.108657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.108679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.108687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.114608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.114630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.114638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.120387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.120408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.120416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.126055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.126076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.126088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.131212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.131242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.131250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.137000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.137023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.137031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.142652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.142675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.559 [2024-07-15 08:03:00.142683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.559 [2024-07-15 08:03:00.148073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.559 [2024-07-15 08:03:00.148095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.148103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.153510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.153532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.158975] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.158997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.159005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.164563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.164585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.164593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.170269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.170292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.175852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.175878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.175887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.181249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.181270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.181277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.186675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.186697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.186705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.192356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.192379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.192387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:15.560 [2024-07-15 08:03:00.197959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.197982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.197991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.203670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.203693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.203702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.209404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.209427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.209435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.215126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.215150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.215158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.220960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.220983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.220992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.226440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.226462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.226472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.232089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.232110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.232119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.237967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.237989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.237997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.243667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.243689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.243697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.249449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.249471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.249479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.255107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.255129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.255137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.261100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.261122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.261130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.266990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.267012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.267020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.272673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.272696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.272708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.278567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.278589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.278597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.284401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.284424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.284433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.290616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.290638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.290646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.296084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.296107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.296115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.301585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.301606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.301615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.560 [2024-07-15 08:03:00.307071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.560 [2024-07-15 08:03:00.307093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.560 [2024-07-15 08:03:00.307101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.312501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.312524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.312532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.316068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.316090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.316099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.320651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.320675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.320684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.326049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.326070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.326078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.331560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.331581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.331589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.337089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.337110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.337119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.342435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.342456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.342464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.347755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.347776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.820 [2024-07-15 08:03:00.347784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.353160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.353182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.353190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.358691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.358720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.364388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.364409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.364417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.369994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.370013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.375888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.375909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.375918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.381420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.381441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.381450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.387020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.387040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.387050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.393111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.393132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.393140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.398956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.398977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.398985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.404711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.404731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.404739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.410287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.410308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.410316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.415798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.415819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.415830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.421402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.421423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.421431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.427293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.427314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.427322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.432960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.432988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.438991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.439012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.439020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.444751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.444772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.444780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.450485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.450514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.456068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.456090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.456098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.461710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.461740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.467218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.467248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.467257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.472974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.472995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.473004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.478716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.820 [2024-07-15 08:03:00.478737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.820 [2024-07-15 08:03:00.478745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.820 [2024-07-15 08:03:00.484256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.821 [2024-07-15 08:03:00.484277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.821 [2024-07-15 08:03:00.484285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.821 [2024-07-15 08:03:00.489593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.821 [2024-07-15 08:03:00.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.821 [2024-07-15 08:03:00.489623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.821 [2024-07-15 08:03:00.495130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.821 [2024-07-15 08:03:00.495150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.821 [2024-07-15 08:03:00.495158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.821 [2024-07-15 08:03:00.501179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.821 [2024-07-15 08:03:00.501200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.821 [2024-07-15 08:03:00.501208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.821 [2024-07-15 08:03:00.508102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:15.821 
[2024-07-15 08:03:00.508122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.821 [2024-07-15 08:03:00.508130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:15.821 [2024-07-15 08:03:00.514830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:15.821 [2024-07-15 08:03:00.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.821 [2024-07-15 08:03:00.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence — nvme_tcp.c:1459 *ERROR*: data digest error on tqpair=(0x16710b0), a READ command print (sqid:1, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats with varying lba, cid (0-15), and sqhd values from 08:03:00.521 through 08:03:01.270 ...]
00:27:16.601 [2024-07-15 08:03:01.275617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
00:27:16.601 [2024-07-15 08:03:01.275639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.601 [2024-07-15 08:03:01.275647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:16.601 [2024-07-15 08:03:01.280990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0)
[2024-07-15 08:03:01.281011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.281019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.286306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.286328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.286336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.291651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.291673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.291681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.296995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.297016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.297024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.302469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.302490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.302498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.307997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.308018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.308025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.313544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16710b0) 00:27:16.601 [2024-07-15 08:03:01.313565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-15 08:03:01.313574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.601 [2024-07-15 08:03:01.319143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
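Each failed READ above is completed with generic status (00/22): status code type 0x0, status code 0x22, which the NVMe spec defines as Command Transient Transport Error; with dnr:0 the bdev layer simply retries, which is exactly what this test exercises. For eyeballing a saved copy of this console output, a plain grep tallies the same failures the test later reads back over RPC (console.log is a hypothetical filename used only for illustration):

  # Count retried completions in a saved copy of this console log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log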
00:27:16.601
00:27:16.601 Latency(us)
00:27:16.601 Device Information                                              : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:16.601 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:16.601 nvme0n1                                                         :       2.00    5418.11     677.26       0.00     0.00    2950.27     644.67    8548.17
00:27:16.601 ===================================================================================================================
00:27:16.601 Total                                                           :               5418.11     677.26       0.00     0.00    2950.27     644.67    8548.17
00:27:16.601 0
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:16.859 | .driver_specific
00:27:16.859 | .nvme_error
00:27:16.859 | .status_code
00:27:16.859 | .command_transient_transport_error'
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 ))
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3401751
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3401751 ']'
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3401751
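The (( 349 > 0 )) check above is the crux of the randread leg: get_transient_errcount asks the running bdevperf instance for its iostat JSON over the bperf RPC socket and extracts the transient-transport-error counter that --nvme-error-stat enables. A minimal standalone sketch of that query (assuming rpc.py from the SPDK checkout and jq on PATH, and that bdevperf is still listening on /var/tmp/bperf.sock):

  # Read the per-bdev NVMe error counters kept by the bdev_nvme driver
  # and pull out the transient transport error count for nvme0n1.
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 )) && echo "observed $errs transient transport errors"

As a sanity check on the table above, the throughput column is consistent with the IOPS column: 5418.11 IOPS at the 131072-byte (128 KiB) IO size is 5418.11 * 0.125 = 677.26 MiB/s.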
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3401751
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3401751'
00:27:16.859 killing process with pid 3401751
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3401751
00:27:16.859 Received shutdown signal, test time was about 2.000000 seconds
00:27:16.859
00:27:16.859 Latency(us)
00:27:16.859 Device Information                                              : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:16.859 ===================================================================================================================
00:27:16.859 Total                                                           :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:27:16.859 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3401751
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3402551
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3402551 /var/tmp/bperf.sock
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3402551 ']'
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:17.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:17.117 08:03:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:17.117 [2024-07-15 08:03:01.814689] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
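The second leg (randwrite, 4096-byte IOs, queue depth 128) starts its own bdevperf in the same way: launch it in the background with -z so it sits idle until driven over RPC, record the pid, and poll until the UNIX socket answers. A rough sketch of that launch dance, assuming an SPDK build tree in $rootdir and the waitforlisten helper sourced from the suite's autotest_common.sh:

  # Start bdevperf idle (-z) on core mask 0x2 with its own RPC socket;
  # randwrite workload, 4096-byte IOs, 2-second runs, queue depth 128.
  "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Block until the process is up and its RPC socket accepts connections.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock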
00:27:17.117 [2024-07-15 08:03:01.814736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402551 ]
00:27:17.117 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.375 [2024-07-15 08:03:01.881435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:17.375 [2024-07-15 08:03:01.960754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:17.943 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:17.943 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:17.943 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:17.943 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:18.201 08:03:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:18.460 nvme0n1
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:18.460 08:03:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:18.460 Running I/O for 2 seconds...
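Collected into one place, the setup just traced is a short RPC conversation: enable per-bdev NVMe error statistics with unlimited retries, make sure crc32c error injection starts disabled, attach the TCP controller with data digest (--ddgst) enabled, arm the injector to corrupt the next 256 crc32c operations, and kick off the timed run. A condensed sketch using the suite's own helpers (bperf_rpc wraps rpc.py against /var/tmp/bperf.sock and bperf_py wraps bdevperf.py; which socket rpc_cmd targets is configured by the harness, so that pairing is an assumption here):

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable         # start from a clean injector
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest on the wire
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt next 256 crc32c ops
  bperf_py perform_tests                                        # run the 2-second randwrite job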
00:27:18.460 [2024-07-15 08:03:03.192315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640
00:27:18.460 [2024-07-15 08:03:03.192497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.460 [2024-07-15 08:03:03.192529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.460 [2024-07-15 08:03:03.202038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640
00:27:18.460 [2024-07-15 08:03:03.202213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.460 [2024-07-15 08:03:03.202241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.460 [... the same three-record pattern (Data digest error on tqpair=(0x14694d0) -> WRITE command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 10 ms from 08:03:03.211 through 08:03:04.031, alternating cid:1/cid:0; only the timestamp, cid, and lba fields differ ...]
00:27:19.493 [2024-07-15 08:03:04.040731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640
00:27:19.493 [2024-07-15 08:03:04.040894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.493 [2024-07-15 08:03:04.040912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.493 [2024-07-15 08:03:04.050176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640
00:27:19.493 [2024-07-15 08:03:04.050347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.493 [2024-07-15 08:03:04.050364] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.059642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.059807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.059825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.069097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.069267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.069284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.078568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.078733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.078750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.088005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.088169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.088186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.097488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.097653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.097670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.106955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.107120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.107141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.116407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.116572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.116592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.125856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.126022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.126040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.135373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.135536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.135554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.144852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.145018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.145037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.154342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.154506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.154524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.163794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.163961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.163978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.173234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.173398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.173416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.182692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.182860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.182878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.192267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.192440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.192458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.201977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.202146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.202163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.211711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.211878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.211896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.221457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.221625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.221643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.231230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.231398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.231416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.493 [2024-07-15 08:03:04.241159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.493 [2024-07-15 08:03:04.241339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.493 [2024-07-15 08:03:04.241357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.752 [2024-07-15 08:03:04.250906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.752 [2024-07-15 08:03:04.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.752 [2024-07-15 
08:03:04.251095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.752 [2024-07-15 08:03:04.260782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.752 [2024-07-15 08:03:04.260952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.752 [2024-07-15 08:03:04.260970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.752 [2024-07-15 08:03:04.270519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.752 [2024-07-15 08:03:04.270692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.752 [2024-07-15 08:03:04.270710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.752 [2024-07-15 08:03:04.280256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.752 [2024-07-15 08:03:04.280425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.752 [2024-07-15 08:03:04.280443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.752 [2024-07-15 08:03:04.290039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.752 [2024-07-15 08:03:04.290209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.752 [2024-07-15 08:03:04.290232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.299783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.299952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.299970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.309537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.309704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.309722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.319283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.319453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 
[2024-07-15 08:03:04.319471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.328924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.329089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.329107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.338554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.338721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.338739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.348123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.348294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.357604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.357767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.357784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.367154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.367325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.367343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.376590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.376753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.376771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.386056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.386220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:19.753 [2024-07-15 08:03:04.386243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.395552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.395717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.395735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.405018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.405183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.405201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.414482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.414648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.414665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.423980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.424144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.424162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.433410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.433576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.433595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.442903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.443066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.443086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.452333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.452499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15859 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:19.753 [2024-07-15 08:03:04.452517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.461804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.461968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.461985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.471301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.471468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.471485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.480751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.480917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.480934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.490341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.490507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.490525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.753 [2024-07-15 08:03:04.499820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:19.753 [2024-07-15 08:03:04.499985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.753 [2024-07-15 08:03:04.500003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.509552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.509717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.519038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.519204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25422 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.519222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.528503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.528674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.528691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.537948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.538113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.538131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.547456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.547621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.547638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.556906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.557071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.557088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.566376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.566542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.566560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.575873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.576054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.585334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.585497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7273 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.585515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.594874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.595040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.595058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.604336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.604505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.604523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.613796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.613959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.613977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.623279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.623444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.632773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.632940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.632956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.642268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.642433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.642450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.651722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.651884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14184 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.651901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.661158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.661332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.661349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.670628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.670791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.670808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.680058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.680223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.680245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.689543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.689709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.689729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.699061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.699229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.699246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.708500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.708663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.708681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.718172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.718349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6089 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.718368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.727910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.728079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.728097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.737627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.737792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.737810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.747232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.747400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.747419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.013 [2024-07-15 08:03:04.756677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.013 [2024-07-15 08:03:04.756843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.013 [2024-07-15 08:03:04.756860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.766266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.766431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.766449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.775806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.775977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.775994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.785331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.785494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8495 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.785512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.794801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.794966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.804496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.804659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.804677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.813930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.814097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.814114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.823398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.823563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.823581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.832856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.833022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.833039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.842283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.842449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.842466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.851786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.851949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.851966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.861209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.861385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.273 [2024-07-15 08:03:04.861403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.273 [2024-07-15 08:03:04.870728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.273 [2024-07-15 08:03:04.870895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.870912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.880163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.880351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.880368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.889675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.889841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.889858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.899178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.899368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.908706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.908873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.908890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.918160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.918333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.918352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.927650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.927815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.927832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.937093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.937260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.937277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.946561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.946724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.946741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.956138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.956308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.956327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.965580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.965747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.965765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.975056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.975221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.274 [2024-07-15 08:03:04.975243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.274 [2024-07-15 08:03:04.984542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640 00:27:20.274 [2024-07-15 08:03:04.984710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.274 [2024-07-15 08:03:04.984727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.274 [2024-07-15 08:03:04.994041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14694d0) with pdu=0x2000190fd640
00:27:20.274 [2024-07-15 08:03:04.994205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.274 [2024-07-15 08:03:04.994222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a further 19 digest-error triplets (08:03:05.003 through 08:03:05.174) elided: same tqpair 0x14694d0 / pdu 0x2000190fd640, WRITE len:1, cid alternating 0/1, only timestamps and lba values differ ...]
00:27:20.534
00:27:20.534 Latency(us)
00:27:20.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:20.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:20.534 nvme0n1 : 2.00 26709.79 104.34 0.00 0.00 4783.99 4559.03 13791.05
00:27:20.534 ===================================================================================================================
00:27:20.534 Total : 26709.79 104.34 0.00 0.00 4783.99 4559.03 13791.05
00:27:20.534 0
00:27:20.534 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:20.534 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:20.534 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:20.534 | .driver_specific
00:27:20.534 | .nvme_error
00:27:20.534 | .status_code
00:27:20.534 | .command_transient_transport_error'
00:27:20.534 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 ))
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3402551
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3402551 ']'
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3402551
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402551
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402551'
killing process with pid 3402551
08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3402551
Received shutdown signal, test time was about 2.000000 seconds
00:27:20.793
00:27:20.793 Latency(us)
00:27:20.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:20.793 ===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:20.793 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3402551
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3403058
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3403058 /var/tmp/bperf.sock
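For reference, the get_transient_errcount check traced above boils down to one RPC call piped through jq; the sketch below is illustrative only, assuming the same bperf RPC socket and bdev name used in this run.

    # Read per-bdev NVMe error counters (populated when bdev_nvme_set_options is
    # called with --nvme-error-stat, as in the setup traced below) and extract
    # the transient transport error count that the digest test asserts on.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The first pass returned 209 here, satisfying the (( 209 > 0 )) assertion before teardown.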
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3403058 ']'
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:21.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:21.052 08:03:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:21.052 [2024-07-15 08:03:05.673208] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:27:21.052 [2024-07-15 08:03:05.673263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403058 ]
00:27:21.052 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:21.052 Zero copy mechanism will not be used.
00:27:21.052 EAL: No free 2048 kB hugepages reported on node 1
00:27:21.052 [2024-07-15 08:03:05.742239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:21.311 [2024-07-15 08:03:05.821985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:21.875 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:21.875 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:21.875 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:21.875 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:22.134 08:03:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:22.393 nvme0n1
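Condensed from the trace above, this second bperf pass (randwrite, 128 KiB I/O, queue depth 16) is brought up in four steps; the following is a sketch reassembled from the logged commands, with paths relative to the SPDK checkout:

    # 1. Start bdevperf on core mask 0x2 in wait-for-RPC mode (-z), on its own socket.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # 2. Turn on per-command NVMe error counters and unlimited bdev retries.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # 3. Keep crc32c error injection disabled while connecting,
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    # 4. then attach the TCP target with data digest (--ddgst) enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0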
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:22.393 08:03:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:22.393 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:22.393 Zero copy mechanism will not be used.
00:27:22.393 Running I/O for 2 seconds...
00:27:22.393 [2024-07-15 08:03:07.102977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90
00:27:22.393 [2024-07-15 08:03:07.103353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.393 [2024-07-15 08:03:07.103381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.393 [2024-07-15 08:03:07.108060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90
00:27:22.393 [2024-07-15 08:03:07.108443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.393 [2024-07-15 08:03:07.108467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
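With the controller attached, the run itself is two more RPC-driven steps (again a sketch lifted from the trace; the -i 32 argument is reproduced verbatim from this run):

    # Inject corruption into crc32c operations so TCP data digests stop matching,
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
    # then drive the 2-second workload configured at startup over the bperf socket.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each injected corruption then surfaces as a triplet like those above: a data digest mismatch on the TCP qpair, the affected WRITE command, and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev_nvme layer is set up to retry given --bdev-retry-count -1.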
08:03:07.128539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.393 [2024-07-15 08:03:07.128906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.393 [2024-07-15 08:03:07.128927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.393 [2024-07-15 08:03:07.133518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.393 [2024-07-15 08:03:07.133871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.393 [2024-07-15 08:03:07.133891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.393 [2024-07-15 08:03:07.138517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.393 [2024-07-15 08:03:07.138907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.393 [2024-07-15 08:03:07.138928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.393 [2024-07-15 08:03:07.143461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.393 [2024-07-15 08:03:07.143831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.393 [2024-07-15 08:03:07.143851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.148550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.148920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.148941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.153537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.153902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.153926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.158435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.158792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.158812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.164345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.164717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.164737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.169356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.169738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.169758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.174298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.174643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.174663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.179058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.179428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.179448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.183875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.184253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.184273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.188772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.189133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.189153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.193507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.193851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.193871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.198365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.198740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.198760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.203782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.653 [2024-07-15 08:03:07.204145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.653 [2024-07-15 08:03:07.204165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.653 [2024-07-15 08:03:07.210625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.210980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.211001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.217716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.218120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.224827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.225183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.225203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.232652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.233106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.233126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.240094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.240460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.240480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.247694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.248050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.248070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.255405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.255757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.255781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.263037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.263421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.263441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.270504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.270873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.270892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.277830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.278177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.278198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.284721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.285086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.285106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.291871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.292231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 
[2024-07-15 08:03:07.292251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.299171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.299528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.306774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.307144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.307164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.313549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.313996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.314018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.320951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.321314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.321335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.328141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.328531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.328551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.334423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.334788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.334808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.340174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.340538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.345382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.345751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.345771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.350258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.350628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.350649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.355247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.355598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.355618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.360925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.361294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.361316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.367957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.368340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.368361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.375852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.376221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.376247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.384005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.384395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.384416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.390800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.391164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.391184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.654 [2024-07-15 08:03:07.398740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.654 [2024-07-15 08:03:07.399096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.654 [2024-07-15 08:03:07.399116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.405881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.406115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.406135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.412110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.412510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.412530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.418293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.424427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.424770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.424791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.430360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.430705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.430728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.436310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.436653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.436673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.442706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.443035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.443055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.448499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.448848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.448867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.454688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.455075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.455096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.462014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.462443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.462463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.470482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.470913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.470933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.479559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 
[2024-07-15 08:03:07.480000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.480019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.488038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.488429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.488448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.495583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.496041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.496062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.504124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.504546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.504566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.512118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.512543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.512563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.519868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.520277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.520296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.528010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.528439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.528458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.535804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.536239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.536259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.543448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.543871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.543889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.551370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.551805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.551825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.558660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.559086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.566328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.566796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.566815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.574307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.574678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.574697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.581641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.582077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.582097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.589514] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.589885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.589905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.597148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.597532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.914 [2024-07-15 08:03:07.597552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.914 [2024-07-15 08:03:07.604611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.914 [2024-07-15 08:03:07.605009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.915 [2024-07-15 08:03:07.605029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.915 [2024-07-15 08:03:07.612108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.915 [2024-07-15 08:03:07.612552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.915 [2024-07-15 08:03:07.612572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.915 [2024-07-15 08:03:07.620176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.915 [2024-07-15 08:03:07.620592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.915 [2024-07-15 08:03:07.620612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.915 [2024-07-15 08:03:07.627502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.915 [2024-07-15 08:03:07.627898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.915 [2024-07-15 08:03:07.627922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.915 [2024-07-15 08:03:07.634912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:22.915 [2024-07-15 08:03:07.635357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.915 [2024-07-15 08:03:07.635376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
[... the same three-record sequence — tcp.c:2067:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90", nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* "WRITE sqid:1 cid:15 nsid:1 lba:<varies> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0", nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* "COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:<0001/0021/0041/0061> p:0 m:0 dnr:0" — repeats for the remaining injected digest errors, app timestamps [2024-07-15 08:03:07.642425] through [2024-07-15 08:03:08.480598], console timestamps 00:27:22.915 through 00:27:23.741, lba varying per write ...]
[2024-07-15 08:03:08.480969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 08:03:08.480990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.741 [2024-07-15 08:03:08.487458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:23.741 [2024-07-15 08:03:08.487899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.741 [2024-07-15 08:03:08.487919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.494452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.494844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.494864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.501377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.501739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.501758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.508017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.508427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.508447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.515175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.515592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.521505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.521828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.521849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.528113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.528534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.528556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.534704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.535048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.535068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.541418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.541795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.541815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.548136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.548595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.548615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.555061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.555471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.555491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.562160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.562538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.000 [2024-07-15 08:03:08.562558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.000 [2024-07-15 08:03:08.569821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.000 [2024-07-15 08:03:08.570183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.570202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.578065] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.578523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.578543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.585201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.585553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.590251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.590564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.590584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.595730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.596070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.596090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.601435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.601761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.601781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.607002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.607341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.607362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.612810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.613138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.613157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:24.001 [2024-07-15 08:03:08.618647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.618978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.618999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.624618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.624944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.624966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.630297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.630634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.630655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.635934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.636278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.636298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.641742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.642093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.642113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.647289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.647619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.647641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.652902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.653281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.653301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.658067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.658413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.658434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.662870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.663200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.663221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.667588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.667935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.672280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.672613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.672637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.676963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.677297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.677316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.681589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.681913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.681934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.686143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.686455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.686475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.690678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.691006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.691025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.695350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.695677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.695696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.700097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.700423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.700443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.704819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.705128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.705149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.709468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.709793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.709813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.714028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.714388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.718504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.001 [2024-07-15 08:03:08.718811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.001 [2024-07-15 08:03:08.718831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.001 [2024-07-15 08:03:08.722939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.723277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.723298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.727466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.727784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.727803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.731901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.732207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.732234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.736321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.736642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.736662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.740887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.741204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.741230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.745881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.746188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 [2024-07-15 08:03:08.746207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.002 [2024-07-15 08:03:08.750737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.002 [2024-07-15 08:03:08.751061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.002 
[2024-07-15 08:03:08.751085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.755502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.755823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.755843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.760322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.760635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.760654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.764926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.765238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.765257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.769746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.770058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.770078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.774182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.774506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.774525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.778664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.778964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.778984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.783040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.783362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.783382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.787783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.788093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.788112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.792461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.792773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.792793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.797469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.797785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.797804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.802334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.802646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.802666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.806815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.807125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.807145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.811508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.811817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.811837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.816234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.816557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.816577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.820878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.821195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.821216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.825462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.825774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.825795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.830056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.830368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.830389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.834667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.834978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.834998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.839248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.839571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.839591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.843712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.844034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.844054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.848180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.261 [2024-07-15 08:03:08.848498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.261 [2024-07-15 08:03:08.848518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.261 [2024-07-15 08:03:08.852647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.852970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.852990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.857073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.857392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.857412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.861477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.861791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.861811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.865887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.866188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.866207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.870276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.870589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.870612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.874792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.875114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.875133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.879336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 
[2024-07-15 08:03:08.879656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.879676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.883763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.884076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.884096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.888389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.888713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.888732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.892966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.893307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.898060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.898375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.898395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.904477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.904875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.904896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.910374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.910709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.910730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.916427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.916843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.916864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.923152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.923535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.923555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.929882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.930291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.930311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.936491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.936889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.936909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.943331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.943740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.943759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.950366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.950742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.950763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.957202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.957611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.957630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.964173] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.964555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.964574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.972386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.972802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.972821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.979740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.980154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.980174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.987664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.988038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.988057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:08.995027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:08.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:08.995441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:09.002575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:09.002989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:09.003008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.262 [2024-07-15 08:03:09.010205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.262 [2024-07-15 08:03:09.010548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.262 [2024-07-15 08:03:09.010567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:24.522 [2024-07-15 08:03:09.016507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.016818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.016837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.021439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.021763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.021784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.026230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.026552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.026571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.030900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.031244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.035791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.036100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.036119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.040333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.040679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.040698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.045067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.045386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.045406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.050119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.050421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.050441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.054966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.055285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.055304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.059693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.060029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.060048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.064363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.064677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.064696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.068791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.069109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.069128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.073362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.073682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.073701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.077978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.078299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.078319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
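The records above are all the evidence the digest-error test needs: each deliberately corrupted WRITE shows up once as a target-side digest failure and once as a host-side transient-error completion. As an aside, the same tally can be recovered from a saved copy of this log with plain grep — a minimal illustration, not part of the SPDK harness (the log-path argument is a placeholder):

    #!/usr/bin/env bash
    # Tally digest-error injections in a captured copy of this log:
    # one count from the target side (tcp.c data digest check) and one
    # from the host side (TRANSIENT TRANSPORT ERROR completions).
    log="${1:?usage: $0 <captured-log>}"
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log"
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log"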
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.082629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.082975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.087592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.087911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.087931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.092261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.092583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.092602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.522 [2024-07-15 08:03:09.098283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1469810) with pdu=0x2000190fef90 00:27:24.522 [2024-07-15 08:03:09.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.522 [2024-07-15 08:03:09.098625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:24.522
00:27:24.522 Latency(us)
00:27:24.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:24.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:24.522 nvme0n1 : 2.00 5074.15 634.27 0.00 0.00 3148.35 1602.78 9061.06
00:27:24.522 ===================================================================================================================
00:27:24.522 Total : 5074.15 634.27 0.00 0.00 3148.35 1602.78 9061.06
00:27:24.522 0
00:27:24.522 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:24.522 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:24.522 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:24.522 | .driver_specific 00:27:24.522 | .nvme_error 00:27:24.522 | .status_code 00:27:24.522 | .command_transient_transport_error' 00:27:24.522 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 327 > 0 )) 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3403058 00:27:24.781
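The 327 in that (( 327 > 0 )) check is the pass criterion: every injected CRC failure must surface as a transient transport error in the bdev iostat. A stand-alone sketch of the same query, assuming the bperf RPC socket at /var/tmp/bperf.sock used by this run:

    # Pull the bdev iostat JSON over the bperf socket and extract the
    # transient transport error counter the digest test asserts on.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "counted $errcount transient transport errors"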
08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3403058 ']' 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3403058 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3403058 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3403058' killing process with pid 3403058 00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3403058 Received shutdown signal, test time was about 2.000000 seconds
00:27:24.781
00:27:24.781 Latency(us)
00:27:24.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:24.781 ===================================================================================================================
00:27:24.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:24.781 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3403058 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3400850 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3400850 ']' 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3400850 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400850 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400850' killing process with pid 3400850 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3400850 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3400850 00:27:25.039 00:27:25.039 real 0m16.908s 00:27:25.039 user 0m32.408s 00:27:25.039 sys 0m4.523s 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.039 08:03:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.039 ************************************ 00:27:25.039 END TEST nvmf_digest_error 00:27:25.039 ************************************ 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:25.297 08:03:09
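killprocess, traced twice just above (once per reactor process), is a guard-then-kill helper; a condensed sketch, with the sudo special case of the real autotest_common.sh version elided:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                         # the '[' -z ... ']' guard
        kill -0 "$pid" 2> /dev/null || return 0           # kill -0 only probes liveness
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in the traces;
                                                          # only consulted for the elided sudo branch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap; valid because the app is our child
    }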
nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.297 rmmod nvme_tcp 00:27:25.297 rmmod nvme_fabrics 00:27:25.297 rmmod nvme_keyring 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3400850 ']' 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3400850 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3400850 ']' 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3400850 00:27:25.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3400850) - No such process 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3400850 is not found' 00:27:25.297 Process with pid 3400850 is not found 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.297 08:03:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.196 08:03:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.455 00:27:27.455 real 0m42.097s 00:27:27.455 user 1m6.748s 00:27:27.455 sys 0m13.515s 00:27:27.455 08:03:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.455 08:03:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.455 ************************************ 00:27:27.455 END TEST nvmf_digest 00:27:27.455 ************************************ 00:27:27.455 08:03:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:27.455 08:03:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:27:27.455 08:03:11 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:27:27.455 08:03:11 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:27:27.455 08:03:11 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:27.455 08:03:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
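nvmftestfini, whose trace closes the digest suite above, unwinds the host stack and the namespace rig; a minimal sketch of the visible steps (treating the netns removal as the assumed body of _remove_spdk_ns, whose output the harness redirects away):

    sync
    modprobe -v -r nvme-tcp       # cascades into the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1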
00:27:27.455 08:03:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.455 08:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.455 ************************************ 00:27:27.455 START TEST nvmf_bdevperf 00:27:27.455 ************************************ 00:27:27.455 08:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:27.456 * Looking for test storage... 00:27:27.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.456 08:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.020 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:34.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:34.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:34.021 Found net devices under 0000:86:00.0: cvl_0_0 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.021 08:03:17 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:34.021 Found net devices under 0000:86:00.1: cvl_0_1 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:27:34.021 00:27:34.021 --- 10.0.0.2 ping statistics --- 00:27:34.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.021 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:27:34.021 00:27:34.021 --- 10.0.0.1 ping statistics --- 00:27:34.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.021 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3407664 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3407664 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3407664 ']' 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.021 08:03:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.021 [2024-07-15 08:03:17.933515] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:34.021 [2024-07-15 08:03:17.933562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.021 [2024-07-15 08:03:18.003922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:34.021 [2024-07-15 08:03:18.077446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
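The dataplane both pings just verified was assembled by the nvmf_tcp_init trace above (nvmf/common.sh@229-268); condensed into plain commands, with the cvl_0_* names exactly as discovered:

    # Target-side port goes into its own namespace; each side gets a /24 address.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP from the initiator port
    ping -c 1 10.0.0.2                                                # the two checks above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1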
00:27:34.021 [2024-07-15 08:03:18.077482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.021 [2024-07-15 08:03:18.077489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.021 [2024-07-15 08:03:18.077495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.021 [2024-07-15 08:03:18.077501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.021 [2024-07-15 08:03:18.077605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.021 [2024-07-15 08:03:18.077641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.021 [2024-07-15 08:03:18.077642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.021 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.021 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:34.021 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.021 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.021 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 [2024-07-15 08:03:18.785982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 Malloc0 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
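The rpc_cmd calls threaded through this trace are autotest shorthand for scripts/rpc.py against the freshly started nvmf_tgt; assuming its default /var/tmp/spdk.sock socket, the tgt_init sequence amounts to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # flags exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420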
00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.279 [2024-07-15 08:03:18.852005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:34.279 { 00:27:34.279 "params": { 00:27:34.279 "name": "Nvme$subsystem", 00:27:34.279 "trtype": "$TEST_TRANSPORT", 00:27:34.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.279 "adrfam": "ipv4", 00:27:34.279 "trsvcid": "$NVMF_PORT", 00:27:34.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.279 "hdgst": ${hdgst:-false}, 00:27:34.279 "ddgst": ${ddgst:-false} 00:27:34.279 }, 00:27:34.279 "method": "bdev_nvme_attach_controller" 00:27:34.279 } 00:27:34.279 EOF 00:27:34.279 )") 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:34.279 08:03:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:34.279 "params": { 00:27:34.279 "name": "Nvme1", 00:27:34.279 "trtype": "tcp", 00:27:34.279 "traddr": "10.0.0.2", 00:27:34.279 "adrfam": "ipv4", 00:27:34.279 "trsvcid": "4420", 00:27:34.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.279 "hdgst": false, 00:27:34.279 "ddgst": false 00:27:34.279 }, 00:27:34.279 "method": "bdev_nvme_attach_controller" 00:27:34.279 }' 00:27:34.279 [2024-07-15 08:03:18.900744] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:34.279 [2024-07-15 08:03:18.900789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407800 ] 00:27:34.279 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.279 [2024-07-15 08:03:18.966477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.537 [2024-07-15 08:03:19.040213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.537 Running I/O for 1 seconds... 
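The config bdevperf just read from /dev/fd/62 is the printf output shown in the trace; reassembled with the timestamps stripped, and with the outer subsystems wrapper assumed from gen_nvmf_target_json in nvmf/common.sh, it is:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

hdgst and ddgst stay false for these bdevperf runs; the digest suite earlier enabled them, which is what drove the data_crc32_calc_done errors.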
00:27:35.914
00:27:35.914 Latency(us)
00:27:35.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:35.914 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:35.914 Verification LBA range: start 0x0 length 0x4000
00:27:35.914 Nvme1n1 : 1.01 10985.31 42.91 0.00 0.00 11606.94 2236.77 9118.05
00:27:35.914 ===================================================================================================================
00:27:35.914 Total : 10985.31 42.91 0.00 0.00 11606.94 2236.77 9118.05
00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3408103 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.914 { 00:27:35.914 "params": { 00:27:35.914 "name": "Nvme$subsystem", 00:27:35.914 "trtype": "$TEST_TRANSPORT", 00:27:35.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.914 "adrfam": "ipv4", 00:27:35.914 "trsvcid": "$NVMF_PORT", 00:27:35.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.914 "hdgst": ${hdgst:-false}, 00:27:35.914 "ddgst": ${ddgst:-false} 00:27:35.914 }, 00:27:35.914 "method": "bdev_nvme_attach_controller" 00:27:35.914 } 00:27:35.914 EOF 00:27:35.914 )") 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:35.914 08:03:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:35.914 "params": { 00:27:35.914 "name": "Nvme1", 00:27:35.914 "trtype": "tcp", 00:27:35.914 "traddr": "10.0.0.2", 00:27:35.914 "adrfam": "ipv4", 00:27:35.914 "trsvcid": "4420", 00:27:35.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.914 "hdgst": false, 00:27:35.914 "ddgst": false 00:27:35.914 }, 00:27:35.914 "method": "bdev_nvme_attach_controller" 00:27:35.914 }' 00:27:35.914 [2024-07-15 08:03:20.472669] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:27:35.914 [2024-07-15 08:03:20.472718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408103 ] 00:27:35.914 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.914 [2024-07-15 08:03:20.540821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.914 [2024-07-15 08:03:20.613614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.480 Running I/O for 15 seconds...
00:27:39.016 08:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3407664 00:27:39.016 08:03:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:39.016 [2024-07-15 08:03:23.441256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.016 [2024-07-15 08:03:23.441807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.016 [2024-07-15 08:03:23.441814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.016 [2024-07-15 08:03:23.441822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:39.016 [2024-07-15 08:03:23.441829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued commands of this qpair - READs for lba:85048 through lba:85720 and WRITEs for lba:85984 through lba:86064, all len:8 - are printed the same way, each followed by an identical ABORTED - SQ DELETION (00/08) completion; the repeated command/completion pairs are elided here ...]
00:27:39.019 [2024-07-15 08:03:23.443389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1940c70 is same with the state(5) to be set
00:27:39.019 [2024-07-15 08:03:23.443397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:39.019 [2024-07-15 08:03:23.443402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:39.019 [2024-07-15 08:03:23.443408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85728 len:8 PRP1 0x0 PRP2 0x0
00:27:39.019 [2024-07-15 08:03:23.443422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.019 [2024-07-15 08:03:23.443465] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1940c70 was disconnected and freed. reset controller.
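Every completion in the run above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" - the expected status when the initiator tears down the I/O submission queue during a controller reset, as bdev_nvme_disconnected_qpair_cb does here. A minimal, self-contained sketch of decoding that pair (illustrative only, not SPDK's actual spdk_nvme_print_completion logic):

    #include <stdio.h>

    /* Decode an NVMe completion status printed as "(SCT/SC)" above.
     * Per the NVMe base spec, SCT 0x0 is the generic command status set,
     * and within it SC 0x08 means Command Aborted due to SQ Deletion. */
    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "other status";
    }

    int main(void)
    {
        /* (00/08), as seen in every completion of the flushed queue */
        printf("(%02x/%02x) -> %s\n", 0x0u, 0x08u, decode_status(0x0, 0x08));
        return 0;
    }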
00:27:39.019 [2024-07-15 08:03:23.446321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.019 [2024-07-15 08:03:23.446370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.019 [2024-07-15 08:03:23.446888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.019 [2024-07-15 08:03:23.446925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.019 [2024-07-15 08:03:23.446948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.019 [2024-07-15 08:03:23.447543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.019 [2024-07-15 08:03:23.448125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.019 [2024-07-15 08:03:23.448159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.019 [2024-07-15 08:03:23.448167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.019 [2024-07-15 08:03:23.451000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.019 [2024-07-15 08:03:23.459698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.019 [2024-07-15 08:03:23.460126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.019 [2024-07-15 08:03:23.460143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.019 [2024-07-15 08:03:23.460155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.019 [2024-07-15 08:03:23.460342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.019 [2024-07-15 08:03:23.460521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.019 [2024-07-15 08:03:23.460530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.019 [2024-07-15 08:03:23.460537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.019 [2024-07-15 08:03:23.463323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
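errno = 111 on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 is not (or no longer) listening, so every reconnect attempt fails immediately inside posix_sock_create() and the controller never leaves the failed state. A standalone POSIX sketch that reproduces the same errno, assuming only a hypothetical host that is up but has no listener on the NVMe/TCP port:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        /* hypothetical target address; the log's 10.0.0.2:4420 is the
         * NVMe/TCP discovery/IO port with the subsystem torn down */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* prints "connect: errno 111 (Connection refused)" when the
             * host is reachable but nothing accepts on the port */
            printf("connect: errno %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }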
[... this reset cycle - nvme_ctrlr_disconnect "resetting controller", connect() failed, errno = 111 against tqpair=0x170f980 (addr=10.0.0.2, port=4420), "controller reinitialization failed", "Resetting controller failed." - repeats unchanged roughly every 13 ms, from [2024-07-15 08:03:23.472529] through the failure logged at [2024-07-15 08:03:23.771831]; 24 further identical cycles elided ...]
00:27:39.281 [2024-07-15 08:03:23.781333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.281 [2024-07-15 08:03:23.781749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.281 [2024-07-15 08:03:23.781765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.281 [2024-07-15 08:03:23.781772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.281 [2024-07-15 08:03:23.781935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.281 [2024-07-15 08:03:23.782098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.281 [2024-07-15 08:03:23.782108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.281 [2024-07-15 08:03:23.782114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.281 [2024-07-15 08:03:23.784732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.281 [2024-07-15 08:03:23.794140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.281 [2024-07-15 08:03:23.794494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.281 [2024-07-15 08:03:23.794510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.281 [2024-07-15 08:03:23.794517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.281 [2024-07-15 08:03:23.794680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.281 [2024-07-15 08:03:23.794842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.281 [2024-07-15 08:03:23.794851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.281 [2024-07-15 08:03:23.794858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.281 [2024-07-15 08:03:23.797457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.281 [2024-07-15 08:03:23.806944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.281 [2024-07-15 08:03:23.807360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.281 [2024-07-15 08:03:23.807415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.281 [2024-07-15 08:03:23.807438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.281 [2024-07-15 08:03:23.808016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.281 [2024-07-15 08:03:23.808624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.281 [2024-07-15 08:03:23.808645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.281 [2024-07-15 08:03:23.808659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.281 [2024-07-15 08:03:23.814893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.281 [2024-07-15 08:03:23.821892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.281 [2024-07-15 08:03:23.822414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.822469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.822491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.823024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.823286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.823299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.823308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.827359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.282 [2024-07-15 08:03:23.834842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.835283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.835326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.835348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.835928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.836154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.836164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.836170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.838875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.282 [2024-07-15 08:03:23.847748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.848180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.848196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.848203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.848373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.848540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.848549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.848555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.851144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.282 [2024-07-15 08:03:23.860629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.861026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.861042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.861049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.861211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.861382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.861391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.861397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.863989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.282 [2024-07-15 08:03:23.873478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.873899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.873915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.873923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.874085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.874254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.874273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.874279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.876868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.282 [2024-07-15 08:03:23.886360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.886785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.886802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.886809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.886972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.887135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.887144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.887150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.889751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.282 [2024-07-15 08:03:23.899233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.899659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.899674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.899681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.899842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.900005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.900014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.900020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.902617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.282 [2024-07-15 08:03:23.912040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.912502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.912519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.912526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.912698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.912869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.912879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.912885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.915525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.282 [2024-07-15 08:03:23.924950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.925348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.925365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.925372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.925535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.925698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.925707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.925713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.928307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.282 [2024-07-15 08:03:23.937896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.938295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.938312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.938322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.938486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.938651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.938661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.938667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.941265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.282 [2024-07-15 08:03:23.950982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.282 [2024-07-15 08:03:23.951400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.282 [2024-07-15 08:03:23.951444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.282 [2024-07-15 08:03:23.951466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.282 [2024-07-15 08:03:23.952044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.282 [2024-07-15 08:03:23.952276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.282 [2024-07-15 08:03:23.952285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.282 [2024-07-15 08:03:23.952291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.282 [2024-07-15 08:03:23.954880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.283 [2024-07-15 08:03:23.963908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:23.964346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:23.964390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:23.964412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:23.964991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:23.965238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:23.965247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:23.965253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.283 [2024-07-15 08:03:23.967838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.283 [2024-07-15 08:03:23.976708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:23.977135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:23.977177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:23.977199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:23.977669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:23.977833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:23.977848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:23.977855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.283 [2024-07-15 08:03:23.980456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.283 [2024-07-15 08:03:23.989567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:23.989989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:23.990005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:23.990012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:23.990175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:23.990345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:23.990355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:23.990361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.283 [2024-07-15 08:03:23.992950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.283 [2024-07-15 08:03:24.002530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:24.002966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:24.002983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:24.002990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:24.003153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:24.003323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:24.003333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:24.003339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.283 [2024-07-15 08:03:24.005927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.283 [2024-07-15 08:03:24.015409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:24.015766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:24.015783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:24.015791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:24.015953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:24.016116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:24.016125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:24.016131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.283 [2024-07-15 08:03:24.018765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.283 [2024-07-15 08:03:24.028327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.283 [2024-07-15 08:03:24.028764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.283 [2024-07-15 08:03:24.028807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.283 [2024-07-15 08:03:24.028829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.283 [2024-07-15 08:03:24.029361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.283 [2024-07-15 08:03:24.029535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.283 [2024-07-15 08:03:24.029544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.283 [2024-07-15 08:03:24.029551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.032300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.544 [2024-07-15 08:03:24.041307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.041667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.041710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.041733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.042328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.042818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.042827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.042833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.045424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.544 [2024-07-15 08:03:24.054144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.054567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.054583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.054591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.054753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.054916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.054925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.054931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.057526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.544 [2024-07-15 08:03:24.067005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.067380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.067425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.067454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.068033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.068597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.068607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.068613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.071203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.544 [2024-07-15 08:03:24.079925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.080353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.080369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.080377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.080540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.080703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.080712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.080718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.083312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.544 [2024-07-15 08:03:24.092730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.093164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.093206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.093243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.093753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.093916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.093924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.093930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.096520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.544 [2024-07-15 08:03:24.105542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.105971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.106014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.106037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.106521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.106686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.106696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.106702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.109362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.544 [2024-07-15 08:03:24.118407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.544 [2024-07-15 08:03:24.118832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.544 [2024-07-15 08:03:24.118848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.544 [2024-07-15 08:03:24.118855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.544 [2024-07-15 08:03:24.119018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.544 [2024-07-15 08:03:24.119181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.544 [2024-07-15 08:03:24.119190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.544 [2024-07-15 08:03:24.119196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.544 [2024-07-15 08:03:24.121882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.131218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.131631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.131673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.131695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.132288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.132788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.132797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.132803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.135394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.545 [2024-07-15 08:03:24.144086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.144445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.144462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.144469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.144632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.144795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.144804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.144810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.147408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.156927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.157362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.157406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.157428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.157922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.158086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.158094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.158100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.160693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.545 [2024-07-15 08:03:24.169716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.170068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.170085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.170092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.170262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.170425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.170434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.170440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.173029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.182515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.182936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.182952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.182959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.183120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.183289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.183298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.183304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.185887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.545 [2024-07-15 08:03:24.195372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.195819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.195860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.195883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.196486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.196986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.196996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.197002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.199749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.208318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.208597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.208613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.208621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.208783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.208946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.208954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.208960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.211557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.545 [2024-07-15 08:03:24.221198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.221623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.221640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.221646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.221809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.221973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.221982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.221988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.224616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.234197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.234631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.234649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.234656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.234819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.234982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.234991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.235001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.237714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.545 [2024-07-15 08:03:24.247141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.247548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.247565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.247572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.247734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.247897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.545 [2024-07-15 08:03:24.247906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.545 [2024-07-15 08:03:24.247912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.545 [2024-07-15 08:03:24.250510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.545 [2024-07-15 08:03:24.259983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.545 [2024-07-15 08:03:24.260379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.545 [2024-07-15 08:03:24.260396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.545 [2024-07-15 08:03:24.260403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.545 [2024-07-15 08:03:24.260565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.545 [2024-07-15 08:03:24.260727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.546 [2024-07-15 08:03:24.260737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.546 [2024-07-15 08:03:24.260743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.546 [2024-07-15 08:03:24.263340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.546 [2024-07-15 08:03:24.272822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.546 [2024-07-15 08:03:24.273241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.546 [2024-07-15 08:03:24.273289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.546 [2024-07-15 08:03:24.273311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.546 [2024-07-15 08:03:24.273883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.546 [2024-07-15 08:03:24.274048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.546 [2024-07-15 08:03:24.274057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.546 [2024-07-15 08:03:24.274063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.546 [2024-07-15 08:03:24.276657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.546 [2024-07-15 08:03:24.285685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.546 [2024-07-15 08:03:24.286109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.546 [2024-07-15 08:03:24.286163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:39.546 [2024-07-15 08:03:24.286186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:39.546 [2024-07-15 08:03:24.286778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:39.546 [2024-07-15 08:03:24.287269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.546 [2024-07-15 08:03:24.287279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.546 [2024-07-15 08:03:24.287285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.546 [2024-07-15 08:03:24.289924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.806 [2024-07-15 08:03:24.298621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.299063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.299080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.299088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.299272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.299447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.299457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.299464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.302279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.311555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.311951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.311967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.311974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.312136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.312306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.312315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.312321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.314912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.324398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.324742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.324758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.324766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.324928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.325094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.325103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.325109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.327707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.337361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.337798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.337814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.337821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.337984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.338147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.338156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.338163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.340765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.350250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.350652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.350669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.350676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.350838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.351001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.351010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.351016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.353610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.363087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.363513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.363530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.363536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.363699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.363862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.363871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.363877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.366476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.375982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.806 [2024-07-15 08:03:24.376389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.806 [2024-07-15 08:03:24.376407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.806 [2024-07-15 08:03:24.376414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.806 [2024-07-15 08:03:24.376577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.806 [2024-07-15 08:03:24.376739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.806 [2024-07-15 08:03:24.376748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.806 [2024-07-15 08:03:24.376755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.806 [2024-07-15 08:03:24.379347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.806 [2024-07-15 08:03:24.388966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.389371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.389388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.389396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.389961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.390261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.390271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.390279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.392871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.401776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.402133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.402150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.402157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.402327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.402491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.402500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.402506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.405100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.414615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.414976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.415018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.415048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.415567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.415732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.415742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.415748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.418344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.427568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.427929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.427947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.427954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.428131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.428314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.428324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.428331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.431166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.440730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.441169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.441187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.441194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.441372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.441546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.441555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.441564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.444276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.453629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.454040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.454082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.454106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.454616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.454781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.454794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.454801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.457646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.466771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.467215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.467239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.467247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.467426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.467604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.467614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.467620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.470449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.479812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.480255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.480273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.480282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.480460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.480637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.480647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.480654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.483483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.493007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.493465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.493483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.493491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.493669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.493845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.493855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.493861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.496685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.506195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.506635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.506653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.506660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.506837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.507035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.507045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.507051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.807 [2024-07-15 08:03:24.509904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.807 [2024-07-15 08:03:24.519256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.807 [2024-07-15 08:03:24.519687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.807 [2024-07-15 08:03:24.519705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.807 [2024-07-15 08:03:24.519712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.807 [2024-07-15 08:03:24.519890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.807 [2024-07-15 08:03:24.520068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.807 [2024-07-15 08:03:24.520077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.807 [2024-07-15 08:03:24.520084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.808 [2024-07-15 08:03:24.522911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.808 [2024-07-15 08:03:24.532424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.808 [2024-07-15 08:03:24.532860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.808 [2024-07-15 08:03:24.532877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.808 [2024-07-15 08:03:24.532884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.808 [2024-07-15 08:03:24.533062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.808 [2024-07-15 08:03:24.533244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.808 [2024-07-15 08:03:24.533255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.808 [2024-07-15 08:03:24.533261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.808 [2024-07-15 08:03:24.536081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.808 [2024-07-15 08:03:24.545676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.808 [2024-07-15 08:03:24.546123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.808 [2024-07-15 08:03:24.546141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:39.808 [2024-07-15 08:03:24.546148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:39.808 [2024-07-15 08:03:24.546341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:39.808 [2024-07-15 08:03:24.546526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.808 [2024-07-15 08:03:24.546536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.808 [2024-07-15 08:03:24.546543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.808 [2024-07-15 08:03:24.549409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.068 [2024-07-15 08:03:24.558766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.068 [2024-07-15 08:03:24.559207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.068 [2024-07-15 08:03:24.559229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.068 [2024-07-15 08:03:24.559237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.068 [2024-07-15 08:03:24.559415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.068 [2024-07-15 08:03:24.559594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.068 [2024-07-15 08:03:24.559603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.068 [2024-07-15 08:03:24.559610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.068 [2024-07-15 08:03:24.562437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.068 [2024-07-15 08:03:24.571959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.068 [2024-07-15 08:03:24.572404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.068 [2024-07-15 08:03:24.572421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.068 [2024-07-15 08:03:24.572429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.068 [2024-07-15 08:03:24.572606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.068 [2024-07-15 08:03:24.572786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.068 [2024-07-15 08:03:24.572796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.068 [2024-07-15 08:03:24.572802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.068 [2024-07-15 08:03:24.575629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.068 [2024-07-15 08:03:24.585134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.068 [2024-07-15 08:03:24.585489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.068 [2024-07-15 08:03:24.585507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.068 [2024-07-15 08:03:24.585514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.068 [2024-07-15 08:03:24.585691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.068 [2024-07-15 08:03:24.585870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.068 [2024-07-15 08:03:24.585880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.068 [2024-07-15 08:03:24.585890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.068 [2024-07-15 08:03:24.588721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.068 [2024-07-15 08:03:24.598237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.068 [2024-07-15 08:03:24.598678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.068 [2024-07-15 08:03:24.598695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.598702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.598881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.599059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.599070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.599077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.601912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.611433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.611872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.611890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.611897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.612074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.612258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.612268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.612275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.615095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.624614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.625044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.625061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.625069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.625254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.625433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.625443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.625450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.628284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.637819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.638280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.638298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.638305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.638482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.638663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.638673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.638680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.641515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.651028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.651455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.651473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.651480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.651657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.651835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.651844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.651851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.654681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.664203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.664578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.664596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.664604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.664782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.664961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.664970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.664976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.667806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.677320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.677759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.677776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.677783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.677965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.678142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.678152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.678158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.680993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.690403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.690822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.690839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.690846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.691024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.691200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.691210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.691217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.694046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.703575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.703958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.703976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.703983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.704160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.704343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.704353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.704360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.707198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.716745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.717187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.717204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.717212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.717396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.717574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.717583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.717594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.720421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.729935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.069 [2024-07-15 08:03:24.730391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.069 [2024-07-15 08:03:24.730409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.069 [2024-07-15 08:03:24.730416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.069 [2024-07-15 08:03:24.730593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.069 [2024-07-15 08:03:24.730771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.069 [2024-07-15 08:03:24.730780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.069 [2024-07-15 08:03:24.730787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.069 [2024-07-15 08:03:24.733618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.069 [2024-07-15 08:03:24.742975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.743347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.743365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.743373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.743552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.743729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.743739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.743746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.746574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.070 [2024-07-15 08:03:24.756075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.756489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.756506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.756514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.756692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.756870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.756880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.756887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.759710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.070 [2024-07-15 08:03:24.769266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.769708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.769729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.769736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.769914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.770092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.770102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.770108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.772941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.070 [2024-07-15 08:03:24.782479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.782840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.782856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.782864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.783042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.783220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.783237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.783243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.786069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.070 [2024-07-15 08:03:24.795604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.796051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.796094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.796116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.796554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.796733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.796743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.796751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.799580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.070 [2024-07-15 08:03:24.808761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.070 [2024-07-15 08:03:24.809193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.070 [2024-07-15 08:03:24.809211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.070 [2024-07-15 08:03:24.809218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.070 [2024-07-15 08:03:24.809403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.070 [2024-07-15 08:03:24.809585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.070 [2024-07-15 08:03:24.809595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.070 [2024-07-15 08:03:24.809601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.070 [2024-07-15 08:03:24.812431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.330 [2024-07-15 08:03:24.821937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.330 [2024-07-15 08:03:24.822352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.330 [2024-07-15 08:03:24.822369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.330 [2024-07-15 08:03:24.822377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.330 [2024-07-15 08:03:24.822549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.330 [2024-07-15 08:03:24.822721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.330 [2024-07-15 08:03:24.822731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.330 [2024-07-15 08:03:24.822737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.330 [2024-07-15 08:03:24.825533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.330 [2024-07-15 08:03:24.834848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.330 [2024-07-15 08:03:24.835259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.330 [2024-07-15 08:03:24.835276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.330 [2024-07-15 08:03:24.835283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.330 [2024-07-15 08:03:24.835446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.330 [2024-07-15 08:03:24.835609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.330 [2024-07-15 08:03:24.835618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.330 [2024-07-15 08:03:24.835624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.330 [2024-07-15 08:03:24.838304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.330 [2024-07-15 08:03:24.847777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.330 [2024-07-15 08:03:24.848200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.330 [2024-07-15 08:03:24.848217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.330 [2024-07-15 08:03:24.848229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.330 [2024-07-15 08:03:24.848416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.330 [2024-07-15 08:03:24.848589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.330 [2024-07-15 08:03:24.848598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.330 [2024-07-15 08:03:24.848605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.330 [2024-07-15 08:03:24.851263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.330 [2024-07-15 08:03:24.860599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.330 [2024-07-15 08:03:24.861039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.330 [2024-07-15 08:03:24.861081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.330 [2024-07-15 08:03:24.861105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.330 [2024-07-15 08:03:24.861615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.330 [2024-07-15 08:03:24.861780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.330 [2024-07-15 08:03:24.861789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.330 [2024-07-15 08:03:24.861795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.330 [2024-07-15 08:03:24.864387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.330 [2024-07-15 08:03:24.873413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.330 [2024-07-15 08:03:24.873820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.330 [2024-07-15 08:03:24.873853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.330 [2024-07-15 08:03:24.873878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.330 [2024-07-15 08:03:24.874443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.330 [2024-07-15 08:03:24.874607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.330 [2024-07-15 08:03:24.874617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.330 [2024-07-15 08:03:24.874622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.330 [2024-07-15 08:03:24.877209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.330 [2024-07-15 08:03:24.886244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.330 [2024-07-15 08:03:24.886664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.330 [2024-07-15 08:03:24.886719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.886741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.887334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.887786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.887795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.887801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.890392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:24.899105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.899543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.899585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.899615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.900199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.900733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.900742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.900748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.903340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.331 [2024-07-15 08:03:24.911901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.912322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.912338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.912345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.912508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.912671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.912680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.912686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.915281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:24.924777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.925199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.925251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.925276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.925858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.926022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.926031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.926037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.928662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.331 [2024-07-15 08:03:24.937703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.938044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.938061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.938068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.938246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.938432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.938445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.938451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.941081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:24.950567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.950914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.950930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.950937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.951098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.951267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.951277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.951283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.953966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.331 [2024-07-15 08:03:24.963617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.963991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.964008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.964015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.964179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.964346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.964355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.964362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.966951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:24.976430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.976784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.976800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.976807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.976969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.977132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.977141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.977148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.979741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.331 [2024-07-15 08:03:24.989228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:24.989654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:24.989670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:24.989677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:24.989841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:24.990003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:24.990012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:24.990018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:24.992614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:25.002101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:25.002544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:25.002588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:25.002610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:25.003002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:25.003166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:25.003175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:25.003181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:25.005773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.331 [2024-07-15 08:03:25.014944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:25.015297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.331 [2024-07-15 08:03:25.015313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.331 [2024-07-15 08:03:25.015320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.331 [2024-07-15 08:03:25.015483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.331 [2024-07-15 08:03:25.015646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.331 [2024-07-15 08:03:25.015655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.331 [2024-07-15 08:03:25.015661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.331 [2024-07-15 08:03:25.018258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.331 [2024-07-15 08:03:25.027742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.331 [2024-07-15 08:03:25.028143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.332 [2024-07-15 08:03:25.028159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.332 [2024-07-15 08:03:25.028167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.332 [2024-07-15 08:03:25.028338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.332 [2024-07-15 08:03:25.028501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.332 [2024-07-15 08:03:25.028511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.332 [2024-07-15 08:03:25.028517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.332 [2024-07-15 08:03:25.031103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.332 [2024-07-15 08:03:25.040691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.332 [2024-07-15 08:03:25.041129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.332 [2024-07-15 08:03:25.041171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.332 [2024-07-15 08:03:25.041194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.332 [2024-07-15 08:03:25.041762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.332 [2024-07-15 08:03:25.041926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.332 [2024-07-15 08:03:25.041935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.332 [2024-07-15 08:03:25.041941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.332 [2024-07-15 08:03:25.044534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.332 [2024-07-15 08:03:25.053561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.332 [2024-07-15 08:03:25.053921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.332 [2024-07-15 08:03:25.053963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.332 [2024-07-15 08:03:25.053985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.332 [2024-07-15 08:03:25.054575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.332 [2024-07-15 08:03:25.055083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.332 [2024-07-15 08:03:25.055092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.332 [2024-07-15 08:03:25.055098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.332 [2024-07-15 08:03:25.057779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.332 [2024-07-15 08:03:25.066342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.332 [2024-07-15 08:03:25.066696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.332 [2024-07-15 08:03:25.066712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.332 [2024-07-15 08:03:25.066720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.332 [2024-07-15 08:03:25.066882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.332 [2024-07-15 08:03:25.067045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.332 [2024-07-15 08:03:25.067054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.332 [2024-07-15 08:03:25.067064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.332 [2024-07-15 08:03:25.069660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.332 [2024-07-15 08:03:25.079264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.332 [2024-07-15 08:03:25.079708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.332 [2024-07-15 08:03:25.079750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.332 [2024-07-15 08:03:25.079772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.332 [2024-07-15 08:03:25.080209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.332 [2024-07-15 08:03:25.080396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.332 [2024-07-15 08:03:25.080406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.332 [2024-07-15 08:03:25.080412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.083158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.092146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.092500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.092516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.092523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.092686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.092849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.092858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.092864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.095460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.104942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.105374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.105417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.105440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.105936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.106100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.106110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.106116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.108710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.117738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.118168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.118210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.118245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.118825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.119308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.119317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.119324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.121988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.130553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.130987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.131029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.131052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.131645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.132231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.132240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.132247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.137798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.145750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.146268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.146289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.146299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.146552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.146807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.146819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.146828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.150883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.158740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.159184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.159240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.159265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.159842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.160364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.160379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.160386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.163124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.171579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.172012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.172054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.172076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.172537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.172703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.172712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.172718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.175402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.184434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.184859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.184875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.184882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.185045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.185208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.185216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.185222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.187817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.197237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.197637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.197653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.197660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.197822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.197987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.197995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.198001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.200599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.210079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.210511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.210554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.210577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.211155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.211352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.211362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.211368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.214121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.222946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.223299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.223316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.223323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.223486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.223649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.223658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.223664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.226254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.235771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.236122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.236139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.236146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.236316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.236480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.236489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.236495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.239185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.248577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.248961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.248981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.248988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.249150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.249319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.249328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.249335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.251927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.261406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.261826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.261880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.261902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.262423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.262588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.262598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.262604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.265266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.274323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.274754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.274796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.274819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.275414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.275819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.275829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.275835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.278424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.287139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.287489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.287506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.287513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.287675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.287843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.287852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.287858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.290549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.300023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.300378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.300394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.300401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.300564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.300727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.300736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.300742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.303339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.591 [2024-07-15 08:03:25.312820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.313242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.313258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.313265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.313427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.313590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.591 [2024-07-15 08:03:25.313599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.591 [2024-07-15 08:03:25.313605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.591 [2024-07-15 08:03:25.316195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.591 [2024-07-15 08:03:25.325680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.591 [2024-07-15 08:03:25.326081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.591 [2024-07-15 08:03:25.326097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.591 [2024-07-15 08:03:25.326103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.591 [2024-07-15 08:03:25.326272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.591 [2024-07-15 08:03:25.326435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.592 [2024-07-15 08:03:25.326444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.592 [2024-07-15 08:03:25.326450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.592 [2024-07-15 08:03:25.329038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.592 [2024-07-15 08:03:25.338573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.592 [2024-07-15 08:03:25.338938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.592 [2024-07-15 08:03:25.338965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.592 [2024-07-15 08:03:25.338973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.592 [2024-07-15 08:03:25.339144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.592 [2024-07-15 08:03:25.339322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.592 [2024-07-15 08:03:25.339332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.592 [2024-07-15 08:03:25.339338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.342084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.851 [2024-07-15 08:03:25.351449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.351803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.351819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.351826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.351989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.352152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.352160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.352166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.354755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.851 [2024-07-15 08:03:25.364233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.364669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.364711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.364733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.365325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.365852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.365861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.365867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.368486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.851 [2024-07-15 08:03:25.377119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.377549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.377565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.377575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.377738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.377901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.377910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.377916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.380517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.851 [2024-07-15 08:03:25.390002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.390364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.390407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.390429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.390927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.391091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.391100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.391106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.393701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.851 [2024-07-15 08:03:25.402883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.403234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.403250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.403257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.403419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.403581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.403590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.403596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.406187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.851 [2024-07-15 08:03:25.415679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.416104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.416120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.416127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.416293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.851 [2024-07-15 08:03:25.416457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.851 [2024-07-15 08:03:25.416469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.851 [2024-07-15 08:03:25.416475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.851 [2024-07-15 08:03:25.419065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.851 [2024-07-15 08:03:25.428555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.851 [2024-07-15 08:03:25.428975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.851 [2024-07-15 08:03:25.429030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:40.851 [2024-07-15 08:03:25.429052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:40.851 [2024-07-15 08:03:25.429648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:40.852 [2024-07-15 08:03:25.429851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.852 [2024-07-15 08:03:25.429859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.852 [2024-07-15 08:03:25.429866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.852 [2024-07-15 08:03:25.432487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.852 [2024-07-15 08:03:25.441476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.441836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.441853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.441861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.442024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.442186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.442194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.442200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.444863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.454379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.454778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.454793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.454800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.454963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.455126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.455135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.455141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.457744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.467236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.467624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.467640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.467647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.467809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.467971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.467980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.467986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.470764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.480085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.480495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.480511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.480519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.480682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.480845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.480854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.480860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.483453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.492991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.493278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.493296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.493303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.493467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.493630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.493639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.493646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.496237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.505883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.506242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.506286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.506308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.506890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.507054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.507063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.507069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.509694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.518792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.519243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.519285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.519307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.519886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.520087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.520096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.520102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.522694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.531589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.532015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.532032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.532040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.532201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.532370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.532380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.532386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.534974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.544486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.544806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.544823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.544831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.544993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.545157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.545166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.545176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.547769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.557413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.557714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.557757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.557780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.852 [2024-07-15 08:03:25.558295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.852 [2024-07-15 08:03:25.558459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.852 [2024-07-15 08:03:25.558468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.852 [2024-07-15 08:03:25.558474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.852 [2024-07-15 08:03:25.561152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.852 [2024-07-15 08:03:25.570337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.852 [2024-07-15 08:03:25.570743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.852 [2024-07-15 08:03:25.570759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.852 [2024-07-15 08:03:25.570766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.853 [2024-07-15 08:03:25.570929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.853 [2024-07-15 08:03:25.571091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.853 [2024-07-15 08:03:25.571100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.853 [2024-07-15 08:03:25.571106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.853 [2024-07-15 08:03:25.573703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.853 [2024-07-15 08:03:25.583183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.853 [2024-07-15 08:03:25.583620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.853 [2024-07-15 08:03:25.583663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.853 [2024-07-15 08:03:25.583686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.853 [2024-07-15 08:03:25.584133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.853 [2024-07-15 08:03:25.584302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.853 [2024-07-15 08:03:25.584311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.853 [2024-07-15 08:03:25.584317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.853 [2024-07-15 08:03:25.586907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.853 [2024-07-15 08:03:25.596087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.853 [2024-07-15 08:03:25.596515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.853 [2024-07-15 08:03:25.596560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:40.853 [2024-07-15 08:03:25.596583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:40.853 [2024-07-15 08:03:25.597164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:40.853 [2024-07-15 08:03:25.597576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.853 [2024-07-15 08:03:25.597595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.853 [2024-07-15 08:03:25.597609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.603845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.611079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.611535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.611557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.611568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.611822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.612076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.612089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.612098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.616152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.624028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.624464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.624480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.624487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.624659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.624830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.624839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.624846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.627594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.636924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.637356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.637399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.637421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.637999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.638542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.638552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.638558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.641305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.649726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.650087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.650104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.650111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.650279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.650443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.650452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.650457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.653047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.662536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.662958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.662975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.662982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.663144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.663311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.663321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.663327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.666016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.675355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.675705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.675720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.114 [2024-07-15 08:03:25.675727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.114 [2024-07-15 08:03:25.675888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.114 [2024-07-15 08:03:25.676051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.114 [2024-07-15 08:03:25.676060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.114 [2024-07-15 08:03:25.676066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.114 [2024-07-15 08:03:25.678665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.114 [2024-07-15 08:03:25.688187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.114 [2024-07-15 08:03:25.688545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.114 [2024-07-15 08:03:25.688561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.688568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.688732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.688895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.688904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.688910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.691501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.700980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.701401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.701445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.701467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.702045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.702641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.702650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.702656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.705341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.713909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.714330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.714345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.714352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.714514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.714677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.714686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.714692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.717286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.726760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.727233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.727249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.727260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.727422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.727586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.727595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.727600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.730319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.739659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.739976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.739992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.739999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.740162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.740350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.740360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.740366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.743015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.752511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.752943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.752986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.753008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.753565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.753730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.753739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.753745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.756333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.765345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.765767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.765815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.765837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.766430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.766655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.766666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.766672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.769262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.778134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.778577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.778620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.778642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.779221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.779818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.779844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.779873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.786103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.793165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.793858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.793880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.793890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.794144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.794405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.794419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.794429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.798483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.806218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.806588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.806629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.806651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.807244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.807826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.807851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.807880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.810628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.819353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.115 [2024-07-15 08:03:25.819767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.115 [2024-07-15 08:03:25.819784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.115 [2024-07-15 08:03:25.819792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.115 [2024-07-15 08:03:25.819969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.115 [2024-07-15 08:03:25.820146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.115 [2024-07-15 08:03:25.820155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.115 [2024-07-15 08:03:25.820161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.115 [2024-07-15 08:03:25.822988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.115 [2024-07-15 08:03:25.832499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.116 [2024-07-15 08:03:25.832939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.116 [2024-07-15 08:03:25.832956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.116 [2024-07-15 08:03:25.832963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.116 [2024-07-15 08:03:25.833140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.116 [2024-07-15 08:03:25.833322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.116 [2024-07-15 08:03:25.833332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.116 [2024-07-15 08:03:25.833339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.116 [2024-07-15 08:03:25.836159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.116 [2024-07-15 08:03:25.845679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.116 [2024-07-15 08:03:25.846050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.116 [2024-07-15 08:03:25.846068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.116 [2024-07-15 08:03:25.846076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.116 [2024-07-15 08:03:25.846259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.116 [2024-07-15 08:03:25.846436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.116 [2024-07-15 08:03:25.846446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.116 [2024-07-15 08:03:25.846452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.116 [2024-07-15 08:03:25.849279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.116 [2024-07-15 08:03:25.858887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.116 [2024-07-15 08:03:25.859321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.116 [2024-07-15 08:03:25.859338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.116 [2024-07-15 08:03:25.859349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.116 [2024-07-15 08:03:25.859526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.116 [2024-07-15 08:03:25.859705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.116 [2024-07-15 08:03:25.859714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.116 [2024-07-15 08:03:25.859721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.116 [2024-07-15 08:03:25.862551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.871933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.872306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.872323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.872331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.872510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.872689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.872698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.872705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.875533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.885051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.885496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.885513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.885521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.885698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.885875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.885885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.885892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.888718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.898240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.898678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.898696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.898703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.898881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.899059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.899073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.899079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.901909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.911431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.911848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.911865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.911873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.912051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.912236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.912246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.912253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.915081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.924601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.925014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.925032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.925039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.925217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.925402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.925412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.925419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.928244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.937757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.938196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.938213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.938221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.938404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.938582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.938592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.938599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.941430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.376 [2024-07-15 08:03:25.950949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.376 [2024-07-15 08:03:25.951377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.376 [2024-07-15 08:03:25.951395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.376 [2024-07-15 08:03:25.951402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.376 [2024-07-15 08:03:25.951579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.376 [2024-07-15 08:03:25.951758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.376 [2024-07-15 08:03:25.951768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.376 [2024-07-15 08:03:25.951774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.376 [2024-07-15 08:03:25.954600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.377 [2024-07-15 08:03:25.964119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.377 [2024-07-15 08:03:25.964560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.377 [2024-07-15 08:03:25.964578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.377 [2024-07-15 08:03:25.964586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.377 [2024-07-15 08:03:25.964762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.377 [2024-07-15 08:03:25.964940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.377 [2024-07-15 08:03:25.964949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.377 [2024-07-15 08:03:25.964956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.377 [2024-07-15 08:03:25.967782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.377 [2024-07-15 08:03:25.977161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.377 [2024-07-15 08:03:25.977516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.377 [2024-07-15 08:03:25.977533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.377 [2024-07-15 08:03:25.977541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.377 [2024-07-15 08:03:25.977719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.377 [2024-07-15 08:03:25.977899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.377 [2024-07-15 08:03:25.977908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.377 [2024-07-15 08:03:25.977915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.377 [2024-07-15 08:03:25.980748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.377 [2024-07-15 08:03:25.990343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.377 [2024-07-15 08:03:25.990780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.377 [2024-07-15 08:03:25.990797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.377 [2024-07-15 08:03:25.990804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.377 [2024-07-15 08:03:25.990986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.377 [2024-07-15 08:03:25.991165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.377 [2024-07-15 08:03:25.991174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.377 [2024-07-15 08:03:25.991180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.377 [2024-07-15 08:03:25.994014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.377 [2024-07-15 08:03:26.003542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.377 [2024-07-15 08:03:26.003916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.377 [2024-07-15 08:03:26.003933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.377 [2024-07-15 08:03:26.003941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.377 [2024-07-15 08:03:26.004118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.377 [2024-07-15 08:03:26.004304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.377 [2024-07-15 08:03:26.004313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.377 [2024-07-15 08:03:26.004320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.377 [2024-07-15 08:03:26.007149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.377 [2024-07-15 08:03:26.016675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.017115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.017132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.017140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.017324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.017502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.017512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.017518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.020345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.377 [2024-07-15 08:03:26.029863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.030221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.030244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.030251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.030431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.030609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.030619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.030629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.033457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.377 [2024-07-15 08:03:26.042978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.043345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.043363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.043371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.043548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.043726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.043736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.043743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.046571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.377 [2024-07-15 08:03:26.056093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.056534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.056551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.056559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.056737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.056915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.056924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.056931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.059760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
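Each refused connect is followed by "Failed to flush tqpair=... (9): Bad file descriptor": by the time the flush runs, the disconnect path has already closed the qpair's socket, and any I/O on the stale descriptor fails with EBADF (9 on Linux). A tiny illustrative sketch, using a pipe in place of the socket:

/* Illustrative only: once the disconnect path has closed the qpair's
 * socket, a later flush on the stale descriptor fails with EBADF (9).
 * A pipe stands in for the socket here. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                            /* descriptor already torn down */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}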
00:27:41.377 [2024-07-15 08:03:26.069222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.069663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.069681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.069689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.069871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.070053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.070063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.070070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.072920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.377 [2024-07-15 08:03:26.082342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.082778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.082798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.082805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.082983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.083161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.083171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.083177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.086006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.377 [2024-07-15 08:03:26.095523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.095957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.095974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.095981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.096158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.096341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.096359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.096366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.099190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.377 [2024-07-15 08:03:26.108567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.108940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.108955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.108963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.109140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.109323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.109332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.109338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.112165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.377 [2024-07-15 08:03:26.121630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.377 [2024-07-15 08:03:26.122096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.377 [2024-07-15 08:03:26.122137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.377 [2024-07-15 08:03:26.122158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.377 [2024-07-15 08:03:26.122688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.377 [2024-07-15 08:03:26.122871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.377 [2024-07-15 08:03:26.122879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.377 [2024-07-15 08:03:26.122885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.377 [2024-07-15 08:03:26.125743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.637 [2024-07-15 08:03:26.134722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.637 [2024-07-15 08:03:26.135145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.637 [2024-07-15 08:03:26.135160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.637 [2024-07-15 08:03:26.135167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.637 [2024-07-15 08:03:26.135356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.637 [2024-07-15 08:03:26.135529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.637 [2024-07-15 08:03:26.135537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.637 [2024-07-15 08:03:26.135543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.637 [2024-07-15 08:03:26.138197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.637 [2024-07-15 08:03:26.147621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.637 [2024-07-15 08:03:26.148078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.637 [2024-07-15 08:03:26.148119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.637 [2024-07-15 08:03:26.148141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.637 [2024-07-15 08:03:26.148712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.637 [2024-07-15 08:03:26.148885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.637 [2024-07-15 08:03:26.148893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.637 [2024-07-15 08:03:26.148900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.637 [2024-07-15 08:03:26.151540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.637 [2024-07-15 08:03:26.160439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.637 [2024-07-15 08:03:26.160778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.637 [2024-07-15 08:03:26.160818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.637 [2024-07-15 08:03:26.160841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.637 [2024-07-15 08:03:26.161433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.637 [2024-07-15 08:03:26.161885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.637 [2024-07-15 08:03:26.161893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.637 [2024-07-15 08:03:26.161899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.637 [2024-07-15 08:03:26.164503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.637 [2024-07-15 08:03:26.173350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.637 [2024-07-15 08:03:26.173682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.637 [2024-07-15 08:03:26.173697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.637 [2024-07-15 08:03:26.173703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.637 [2024-07-15 08:03:26.173865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.637 [2024-07-15 08:03:26.174027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.637 [2024-07-15 08:03:26.174035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.637 [2024-07-15 08:03:26.174040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.637 [2024-07-15 08:03:26.176639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.637 [2024-07-15 08:03:26.186153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.637 [2024-07-15 08:03:26.186532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.637 [2024-07-15 08:03:26.186548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.637 [2024-07-15 08:03:26.186555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.637 [2024-07-15 08:03:26.186726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.637 [2024-07-15 08:03:26.186897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.637 [2024-07-15 08:03:26.186904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.637 [2024-07-15 08:03:26.186910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.637 [2024-07-15 08:03:26.189551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.637 [2024-07-15 08:03:26.199003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.199439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.199482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.199503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.199935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.200098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.200105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.200111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.202856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.211848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.212295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.212339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.212368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.212934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.213098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.213106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.213112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.215803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.224694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.225099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.225113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.225120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.225305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.225478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.225486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.225492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.228146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.237865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.238312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.238368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.238391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.238925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.239112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.239120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.239126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.241873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.250668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.251072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.251088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.251094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.251279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.251452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.251463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.251470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.254126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.263766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.264143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.264158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.264165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.264353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.264526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.264533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.264539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.267197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.276622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.277026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.277041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.277047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.277209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.277402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.277411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.277417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.280074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.289413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.289811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.289827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.289833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.289995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.290157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.290165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.290171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.292954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.302297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.302713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.302756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.302778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.303204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.303394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.303403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.303409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.306226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.315135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.315547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.315582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.315605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.316135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.316304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.316312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.316318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.318904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.327924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.328251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.328266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.328273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.328435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.328597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.328605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.328611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.331205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.638 [2024-07-15 08:03:26.340880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.341294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.341335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.341358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.341927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.342099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.342107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.342113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.638 [2024-07-15 08:03:26.344832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.638 [2024-07-15 08:03:26.353766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.638 [2024-07-15 08:03:26.354165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.638 [2024-07-15 08:03:26.354180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.638 [2024-07-15 08:03:26.354187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.638 [2024-07-15 08:03:26.354377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.638 [2024-07-15 08:03:26.354550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.638 [2024-07-15 08:03:26.354558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.638 [2024-07-15 08:03:26.354564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.639 [2024-07-15 08:03:26.357212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.639 [2024-07-15 08:03:26.366689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.639 [2024-07-15 08:03:26.367092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.639 [2024-07-15 08:03:26.367107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.639 [2024-07-15 08:03:26.367113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.639 [2024-07-15 08:03:26.367298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.639 [2024-07-15 08:03:26.367469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.639 [2024-07-15 08:03:26.367477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.639 [2024-07-15 08:03:26.367483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.639 [2024-07-15 08:03:26.370167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.639 [2024-07-15 08:03:26.379592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.639 [2024-07-15 08:03:26.379993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.639 [2024-07-15 08:03:26.380008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.639 [2024-07-15 08:03:26.380014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.639 [2024-07-15 08:03:26.380177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.639 [2024-07-15 08:03:26.380366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.639 [2024-07-15 08:03:26.380375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.639 [2024-07-15 08:03:26.380384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.639 [2024-07-15 08:03:26.383046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.899 [2024-07-15 08:03:26.392573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.899 [2024-07-15 08:03:26.392941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.899 [2024-07-15 08:03:26.392956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.899 [2024-07-15 08:03:26.392964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.899 [2024-07-15 08:03:26.393147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.899 [2024-07-15 08:03:26.393326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.899 [2024-07-15 08:03:26.393334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.899 [2024-07-15 08:03:26.393341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.899 [2024-07-15 08:03:26.396043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.899 [2024-07-15 08:03:26.405378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.899 [2024-07-15 08:03:26.405780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.899 [2024-07-15 08:03:26.405829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.899 [2024-07-15 08:03:26.405850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.899 [2024-07-15 08:03:26.406365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.899 [2024-07-15 08:03:26.406528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.899 [2024-07-15 08:03:26.406536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.899 [2024-07-15 08:03:26.406542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.899 [2024-07-15 08:03:26.409195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.899 [2024-07-15 08:03:26.418308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.899 [2024-07-15 08:03:26.418687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.899 [2024-07-15 08:03:26.418702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.899 [2024-07-15 08:03:26.418709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.899 [2024-07-15 08:03:26.418870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.899 [2024-07-15 08:03:26.419033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.899 [2024-07-15 08:03:26.419040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.899 [2024-07-15 08:03:26.419046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.899 [2024-07-15 08:03:26.421638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
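The timestamps show a steady cadence of roughly 13 ms between attempts, each ending in a reconnect_poll_async failure and a rescheduled reset. SPDK drives this through an async poller in nvme_ctrlr.c/bdev_nvme.c; the sketch below compresses the same disconnect-retry-fail pattern into a blocking loop with invented names, purely to illustrate the control flow:

/* Hypothetical compression of the retry pattern in the log: attempt a
 * reconnect, treat ECONNREFUSED as transient, wait briefly, try again.
 * SPDK's real logic is an async poller, not a blocking loop; all names
 * below are invented. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the transport connect; returns -ECONNREFUSED while the
 * target is down (as in the log) and would return 0 once it is back. */
static int try_connect_qpair(void)
{
    return -ECONNREFUSED;
}

static bool reconnect_with_retry(int max_attempts, useconds_t delay_us)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect_qpair() == 0) {
            return true;                      /* controller back online */
        }
        fprintf(stderr, "attempt %d: reconnect refused\n", attempt);
        usleep(delay_us);                     /* log shows a ~13 ms cadence */
    }
    return false;                             /* controller stays failed */
}

int main(void)
{
    if (!reconnect_with_retry(5, 13000)) {
        fprintf(stderr, "Resetting controller failed.\n");
    }
    return 0;
}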
00:27:41.899 [2024-07-15 08:03:26.431118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.899 [2024-07-15 08:03:26.431520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.899 [2024-07-15 08:03:26.431536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.899 [2024-07-15 08:03:26.431542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.899 [2024-07-15 08:03:26.431714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.899 [2024-07-15 08:03:26.431884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.899 [2024-07-15 08:03:26.431892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.899 [2024-07-15 08:03:26.431898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.899 [2024-07-15 08:03:26.434539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3407664 Killed "${NVMF_APP[@]}" "$@" 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.899 [2024-07-15 08:03:26.444203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.899 [2024-07-15 08:03:26.444602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.899 [2024-07-15 08:03:26.444618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.899 [2024-07-15 08:03:26.444625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.899 [2024-07-15 08:03:26.444801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3409068 00:27:41.899 [2024-07-15 08:03:26.444978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.899 [2024-07-15 08:03:26.444987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.899 [2024-07-15 08:03:26.444993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3409068 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3409068 ']' 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.899 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.900 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.900 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.900 08:03:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.900 [2024-07-15 08:03:26.447822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.900 [2024-07-15 08:03:26.457343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.457783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.457802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.457809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.457985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.458162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.458170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.458177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.461007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
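waitforlisten above blocks until the freshly started target accepts connections on its JSON-RPC socket, /var/tmp/spdk.sock. The real helper is a bash function in SPDK's test scripts and also enforces a retry limit; a rough C equivalent of the polling it performs (the socket path comes from the log, everything else is hypothetical):

/* Hypothetical sketch of what a waitforlisten-style helper does: poll
 * until a UNIX-domain socket accepts a connection. The path comes from
 * the log; the real helper is bash and also enforces a retry limit. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int sock_is_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        return 0;
    }
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";
    while (!sock_is_listening(path)) {
        usleep(100000);                       /* retry every 100 ms */
    }
    printf("process is listening on %s\n", path);
    return 0;
}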
00:27:41.900 [2024-07-15 08:03:26.470539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.470978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.470994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.471001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.471177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.471362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.471370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.471377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.474201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.900 [2024-07-15 08:03:26.483732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.484135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.484151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.484158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.484357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.484535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.484543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.484549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.487378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.900 [2024-07-15 08:03:26.490393] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:27:41.900 [2024-07-15 08:03:26.490430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.900 [2024-07-15 08:03:26.496851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.497220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.497246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.497253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.497430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.497607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.497615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.497622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.500379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.900 [2024-07-15 08:03:26.509878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.510317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.510334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.510341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.510514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.510686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.510694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.510701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.513451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
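The EAL parameters above carry the core mask from nvmfappstart -m 0xE: 0xE is binary 1110, i.e. cores 1, 2 and 3, consistent with the "Total cores available: 3" notice further down. A standalone sketch of the mask decoding (illustrative only, not DPDK's parser):

/* Standalone decode of the core mask above (not DPDK's parser):
 * 0xE = 0b1110 selects cores 1, 2 and 3. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;            /* from "-m 0xE" / "-c 0xE" */
    int count = 0;

    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("core %d selected\n", core);
            count++;
        }
    }
    printf("Total cores available: %d\n", count);
    return 0;
}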
00:27:41.900 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.900 [2024-07-15 08:03:26.522906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.523342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.523359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.523366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.523538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.523727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.523735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.523741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.526480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.900 [2024-07-15 08:03:26.535868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.900 [2024-07-15 08:03:26.536304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.900 [2024-07-15 08:03:26.536320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:41.900 [2024-07-15 08:03:26.536327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:41.900 [2024-07-15 08:03:26.536499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:41.900 [2024-07-15 08:03:26.536675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.900 [2024-07-15 08:03:26.536683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.900 [2024-07-15 08:03:26.536689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.900 [2024-07-15 08:03:26.539499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.900 [2024-07-15 08:03:26.548912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.900 [2024-07-15 08:03:26.549328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.549344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.549351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.549523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.549696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.549705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.549711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.552458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.562014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.562426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.562443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.562450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.562623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.562659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:41.901 [2024-07-15 08:03:26.562795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.562804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.562811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.565563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.575113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.575492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.575509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.575517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.575690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.575862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.575871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.575885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.578631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.588188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.588611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.588635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.588806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.588979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.588988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.588994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.591735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.601142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.601602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.601619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.601626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.601798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.601971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.601979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.601986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.604729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.614126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.614574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.614591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.614598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.614769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.614941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.614949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.614956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.617697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.627098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.627517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.627538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.901 [2024-07-15 08:03:26.627545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.901 [2024-07-15 08:03:26.627717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.901 [2024-07-15 08:03:26.627889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.901 [2024-07-15 08:03:26.627898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.901 [2024-07-15 08:03:26.627903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.901 [2024-07-15 08:03:26.630649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.901 [2024-07-15 08:03:26.636554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:41.901 [2024-07-15 08:03:26.636581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:41.901 [2024-07-15 08:03:26.636588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:41.901 [2024-07-15 08:03:26.636607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:41.901 [2024-07-15 08:03:26.636611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:41.901 [2024-07-15 08:03:26.637180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:41.901 [2024-07-15 08:03:26.637096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:41.901 [2024-07-15 08:03:26.637182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:41.901 [2024-07-15 08:03:26.640172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.901 [2024-07-15 08:03:26.640605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.901 [2024-07-15 08:03:26.640623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:41.902 [2024-07-15 08:03:26.640631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:41.902 [2024-07-15 08:03:26.640810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:41.902 [2024-07-15 08:03:26.640989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.902 [2024-07-15 08:03:26.640997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.902 [2024-07-15 08:03:26.641004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.902 [2024-07-15 08:03:26.643836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.164 [2024-07-15 08:03:26.653363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.164 [2024-07-15 08:03:26.653773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.164 [2024-07-15 08:03:26.653792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.164 [2024-07-15 08:03:26.653800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.164 [2024-07-15 08:03:26.653977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.164 [2024-07-15 08:03:26.654155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.164 [2024-07-15 08:03:26.654163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.164 [2024-07-15 08:03:26.654176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.164 [2024-07-15 08:03:26.657007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.164 [2024-07-15 08:03:26.666532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.164 [2024-07-15 08:03:26.666996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.164 [2024-07-15 08:03:26.667016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.164 [2024-07-15 08:03:26.667027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.164 [2024-07-15 08:03:26.667205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.164 [2024-07-15 08:03:26.667391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.164 [2024-07-15 08:03:26.667400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.164 [2024-07-15 08:03:26.667408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.164 [2024-07-15 08:03:26.670234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.164 [2024-07-15 08:03:26.679577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.164 [2024-07-15 08:03:26.680033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.164 [2024-07-15 08:03:26.680051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.164 [2024-07-15 08:03:26.680058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.164 [2024-07-15 08:03:26.680241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.164 [2024-07-15 08:03:26.680420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.164 [2024-07-15 08:03:26.680428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.164 [2024-07-15 08:03:26.680436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.164 [2024-07-15 08:03:26.683272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.164 [2024-07-15 08:03:26.692617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.164 [2024-07-15 08:03:26.693052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.164 [2024-07-15 08:03:26.693070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.164 [2024-07-15 08:03:26.693078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.164 [2024-07-15 08:03:26.693260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.164 [2024-07-15 08:03:26.693439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.164 [2024-07-15 08:03:26.693447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.164 [2024-07-15 08:03:26.693454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.164 [2024-07-15 08:03:26.696276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.164 [2024-07-15 08:03:26.705803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.164 [2024-07-15 08:03:26.706233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.164 [2024-07-15 08:03:26.706256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.706263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.706440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.706618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.706626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.706633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.709457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.718960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.719376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.719392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.719399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.719576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.719754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.719762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.719769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.722594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.732111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.732530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.732546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.732554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.732731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.732908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.732916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.732923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.735775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.745299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.745664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.745680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.745687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.745864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.746043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.746051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.746058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.748886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.758392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.758808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.758823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.758830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.759007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.759184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.759191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.759199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.762025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.771531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.771945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.771961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.771968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.772145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.772326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.772334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.772341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.775161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.784673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.785088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.785104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.785110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.785292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.785469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.785477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.785483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.788309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.797810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.798221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.798242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.798249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.165 [2024-07-15 08:03:26.798425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.165 [2024-07-15 08:03:26.798602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.165 [2024-07-15 08:03:26.798610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.165 [2024-07-15 08:03:26.798617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.165 [2024-07-15 08:03:26.801460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.165 [2024-07-15 08:03:26.810960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.165 [2024-07-15 08:03:26.811373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.165 [2024-07-15 08:03:26.811390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.165 [2024-07-15 08:03:26.811397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.811573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.811750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.811758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.811765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.814588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.824108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.824546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.824564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.824571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.824748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.824924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.824932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.824939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.827762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.837271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.837697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.837713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.837723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.837900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.838076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.838084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.838091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.840922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.850440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.850830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.850846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.850853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.851029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.851207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.851215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.851222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.854043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.863554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.863973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.863989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.863996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.864172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.864353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.864361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.864367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.867186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.876686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.877105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.877121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.877128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.877308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.877486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.877497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.877503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.880328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.889835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.890233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.890249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.890256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.890433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.890610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.890618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.890625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.893466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.166 [2024-07-15 08:03:26.902968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.166 [2024-07-15 08:03:26.903361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.166 [2024-07-15 08:03:26.903377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.166 [2024-07-15 08:03:26.903384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.166 [2024-07-15 08:03:26.903561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.166 [2024-07-15 08:03:26.903737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.166 [2024-07-15 08:03:26.903746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.166 [2024-07-15 08:03:26.903752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.166 [2024-07-15 08:03:26.906574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.457 [2024-07-15 08:03:26.916155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.457 [2024-07-15 08:03:26.916537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.457 [2024-07-15 08:03:26.916554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.457 [2024-07-15 08:03:26.916561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.457 [2024-07-15 08:03:26.916738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.457 [2024-07-15 08:03:26.916915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.457 [2024-07-15 08:03:26.916923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.457 [2024-07-15 08:03:26.916930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.457 [2024-07-15 08:03:26.919761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.457 [2024-07-15 08:03:26.929287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.457 [2024-07-15 08:03:26.929635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.457 [2024-07-15 08:03:26.929652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.457 [2024-07-15 08:03:26.929659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.457 [2024-07-15 08:03:26.929836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.457 [2024-07-15 08:03:26.930012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.457 [2024-07-15 08:03:26.930020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.457 [2024-07-15 08:03:26.930027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.457 [2024-07-15 08:03:26.932853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.457 [2024-07-15 08:03:26.942372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.457 [2024-07-15 08:03:26.942740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:26.942756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:26.942764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:26.942943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:26.943122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:26.943131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:26.943138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:26.945969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:26.955484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:26.955855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:26.955870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:26.955877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:26.956055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:26.956237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:26.956246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:26.956253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:26.959072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:26.968578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:26.968995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:26.969011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:26.969018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:26.969198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:26.969379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:26.969388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:26.969394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:26.972214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:26.981727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:26.982143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:26.982159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:26.982166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:26.982346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:26.982524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:26.982532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:26.982539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:26.985362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:26.994866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:26.995275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:26.995292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:26.995298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:26.995474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:26.995651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:26.995660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:26.995666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:26.998492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.008007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.008318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.008335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.008342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.008518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.008696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.008704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.008714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.011540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.021053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.021501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.021517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.021524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.021701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.021878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.021886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.021893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.024721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.034232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.034674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.034690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.034697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.034873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.035050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.035058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.035064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.037889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.047399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.047840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.047856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.047864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.048041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.048218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.048231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.048238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.051067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.060581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.061025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.061041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.061048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.061229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.061406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.061414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.061421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.064241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.458 [2024-07-15 08:03:27.073738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.458 [2024-07-15 08:03:27.074155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.458 [2024-07-15 08:03:27.074171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.458 [2024-07-15 08:03:27.074178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.458 [2024-07-15 08:03:27.074357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.458 [2024-07-15 08:03:27.074534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.458 [2024-07-15 08:03:27.074542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.458 [2024-07-15 08:03:27.074549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.458 [2024-07-15 08:03:27.077373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.459 [2024-07-15 08:03:27.086876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:42.459 [2024-07-15 08:03:27.087291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:42.459 [2024-07-15 08:03:27.087308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420
00:27:42.459 [2024-07-15 08:03:27.087315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set
00:27:42.459 [2024-07-15 08:03:27.087492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor
00:27:42.459 [2024-07-15 08:03:27.087669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:42.459 [2024-07-15 08:03:27.087677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:42.459 [2024-07-15 08:03:27.087683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:42.459 [2024-07-15 08:03:27.090504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:42.459 [2024-07-15 08:03:27.099999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.100436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.100452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.100460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.100636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.100817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.100825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.100831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.103654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.459 [2024-07-15 08:03:27.113165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.113611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.113628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.113634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.113811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.113988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.113997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.114003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.116827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.459 [2024-07-15 08:03:27.126334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.126774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.126790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.126797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.126973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.127149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.127157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.127163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.129987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.459 [2024-07-15 08:03:27.139494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.139920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.139936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.139944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.140120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.140302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.140310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.140316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.143147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.459 [2024-07-15 08:03:27.152667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.153041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.153057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.153064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.153246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.153424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.153432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.153438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.156261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.459 [2024-07-15 08:03:27.165768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.166207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.166222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.166233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.166411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.166588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.166596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.166602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.169425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.459 [2024-07-15 08:03:27.178943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.179364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.179379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.179386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.179562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.179740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.179748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.179754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.182595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.459 [2024-07-15 08:03:27.192115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.192482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.192501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.192509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.192685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.192863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.192871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.192877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.459 [2024-07-15 08:03:27.195725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.459 [2024-07-15 08:03:27.205244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.459 [2024-07-15 08:03:27.205597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.459 [2024-07-15 08:03:27.205613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.459 [2024-07-15 08:03:27.205620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.459 [2024-07-15 08:03:27.205796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.459 [2024-07-15 08:03:27.205973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.459 [2024-07-15 08:03:27.205982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.459 [2024-07-15 08:03:27.205988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.208813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.719 [2024-07-15 08:03:27.218321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.218634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.218650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.218658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.218836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.219013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.219022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.219029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.221862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.719 [2024-07-15 08:03:27.231381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.231801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.231817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.231824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.232001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.232393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.232404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.232411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.235240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.719 [2024-07-15 08:03:27.244425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.244765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.244783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.244790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.244966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.245144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.245154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.245161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.247996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.719 [2024-07-15 08:03:27.257522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.257943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.257961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.257970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.258147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.258331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.258341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.258348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.261169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.719 [2024-07-15 08:03:27.270662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.271098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.271113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.271121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.271302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.271479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.271488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.271494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.274319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.719 [2024-07-15 08:03:27.283848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.284230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.284246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.284253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.284434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.284617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.284625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.284632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.287456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.719 [2024-07-15 08:03:27.296966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.297336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.297353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.297360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.297536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.297714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.297722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.297728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.300555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.719 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.719 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:42.719 08:03:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.719 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.719 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 [2024-07-15 08:03:27.310081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.719 [2024-07-15 08:03:27.310454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.719 [2024-07-15 08:03:27.310471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.719 [2024-07-15 08:03:27.310477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.719 [2024-07-15 08:03:27.310655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.719 [2024-07-15 08:03:27.310832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.719 [2024-07-15 08:03:27.310841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.719 [2024-07-15 08:03:27.310848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.719 [2024-07-15 08:03:27.313677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 [2024-07-15 08:03:27.323197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.323499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.323515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.323522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.323699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.323877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.323885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.323891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.720 [2024-07-15 08:03:27.326722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:42.720 [2024-07-15 08:03:27.336249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.336546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.336563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.336570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.336747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.336926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.336934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.336941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.720 [2024-07-15 08:03:27.339768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 [2024-07-15 08:03:27.345691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.720 [2024-07-15 08:03:27.349295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.349592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.349608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.349615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.349792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.349969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.349977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.349983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
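The nvmf_create_transport call traced above (answered by the "*** TCP Transport Init ***" notice) can also be issued standalone; a hedged equivalent, with the rpc.py location assumed and the flags copied verbatim from the trace:

# -t tcp selects the TCP transport; -u 8192 sets the IO unit size in bytes;
# -o is passed through from NVMF_TRANSPORT_OPTS as set in nvmf/common.sh
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192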
00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 [2024-07-15 08:03:27.352813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 [2024-07-15 08:03:27.362342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.362731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.362747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.362754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.362931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.363109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.363117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.363123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.720 [2024-07-15 08:03:27.365954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 [2024-07-15 08:03:27.375488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.375813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.375830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.375837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.376015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.376191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.376199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.376206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.720 [2024-07-15 08:03:27.379038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
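The bdev_malloc_create step above builds the RAM-backed disk that the subsystem will export; as a standalone command (rpc.py path assumed, arguments taken from the trace):

# 64 = total size in MiB, 512 = block size in bytes, -b = bdev name
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0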
00:27:42.720 Malloc0 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 [2024-07-15 08:03:27.388569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.388914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.388930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.388937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.389114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.389297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.389309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.389316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:42.720 [2024-07-15 08:03:27.392138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 [2024-07-15 08:03:27.401665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.402057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.720 [2024-07-15 08:03:27.402072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170f980 with addr=10.0.0.2, port=4420 00:27:42.720 [2024-07-15 08:03:27.402079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f980 is same with the state(5) to be set 00:27:42.720 [2024-07-15 08:03:27.402261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170f980 (9): Bad file descriptor 00:27:42.720 [2024-07-15 08:03:27.402439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:42.720 [2024-07-15 08:03:27.402447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:42.720 [2024-07-15 08:03:27.402454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
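The two RPCs traced above create the subsystem and attach the malloc bdev to it as a namespace; a hedged standalone equivalent (rpc.py path assumed):

# -a allows any host NQN to connect; -s sets the subsystem serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# exposes Malloc0 as a namespace of cnode1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0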
00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 [2024-07-15 08:03:27.405281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:42.720 [2024-07-15 08:03:27.406175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 08:03:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3408103 00:27:42.720 [2024-07-15 08:03:27.414802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:42.720 [2024-07-15 08:03:27.449131] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:52.691 00:27:52.691 Latency(us) 00:27:52.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.691 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:52.691 Verification LBA range: start 0x0 length 0x4000 00:27:52.691 Nvme1n1 : 15.01 8058.84 31.48 12752.60 0.00 6130.96 658.92 13905.03 00:27:52.691 =================================================================================================================== 00:27:52.691 Total : 8058.84 31.48 12752.60 0.00 6130.96 658.92 13905.03 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.691 rmmod nvme_tcp 00:27:52.691 rmmod nvme_fabrics 00:27:52.691 rmmod nvme_keyring 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3409068 ']' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3409068 ']' 
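In the result table above, the MiB/s column follows directly from the IOPS column at the job's 4096-byte IO size; a quick check of that arithmetic:

# 8058.84 IOPS * 4096 B per IO / 2^20 B per MiB
awk 'BEGIN { printf "%.2f\n", 8058.84 * 4096 / 1048576 }'   # prints 31.48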
00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3409068' 00:27:52.691 killing process with pid 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3409068 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.691 08:03:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.067 08:03:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.067 00:27:54.067 real 0m26.560s 00:27:54.067 user 1m3.377s 00:27:54.067 sys 0m6.464s 00:27:54.067 08:03:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.067 08:03:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.067 ************************************ 00:27:54.067 END TEST nvmf_bdevperf 00:27:54.067 ************************************ 00:27:54.067 08:03:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:54.067 08:03:38 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:54.067 08:03:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:54.067 08:03:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.067 08:03:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.067 ************************************ 00:27:54.067 START TEST nvmf_target_disconnect 00:27:54.067 ************************************ 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:54.067 * Looking for test storage... 
00:27:54.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.067 08:03:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.634 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
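After the PCI scan below finds the two e810 ports, nvmf_tcp_init moves one of them into a private network namespace and numbers both ends from 10.0.0.0/24 before running the ping checks. A hedged recap of that plumbing, with the interface and namespace names taken from the traced commands:

# target side lives in its own netns; initiator side stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # reachability check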
00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:00.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:00.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.635 08:03:44 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:00.635 Found net devices under 0000:86:00.0: cvl_0_0 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:00.635 Found net devices under 0000:86:00.1: cvl_0_1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:28:00.635 00:28:00.635 --- 10.0.0.2 ping statistics --- 00:28:00.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.635 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:28:00.635 00:28:00.635 --- 10.0.0.1 ping statistics --- 00:28:00.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.635 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:00.635 ************************************ 00:28:00.635 START TEST nvmf_target_disconnect_tc1 00:28:00.635 ************************************ 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:00.635 
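The sequence above wires the two discovered e810 netdevs (cvl_0_0 under 0000:86:00.0, cvl_0_1 under 0000:86:00.1) into a point-to-point NVMe/TCP topology: the target interface is moved into its own network namespace and given 10.0.0.2/24, the initiator keeps 10.0.0.1/24 in the root namespace, an iptables rule admits traffic to the NVMe/TCP port 4420, and the two pings verify the path in both directions before any test runs. Stripped of the xtrace prefixes, the setup is roughly this (a sketch assuming root privileges and the interface names discovered above):

  # target side: isolate cvl_0_0 in a namespace with the target address
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator side: cvl_0_1 stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions before starting tests
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1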
08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:00.635 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.635 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.635 [2024-07-15 08:03:44.628635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.635 [2024-07-15 08:03:44.628674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd7e60 with addr=10.0.0.2, port=4420 00:28:00.635 [2024-07-15 08:03:44.628696] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:00.635 [2024-07-15 08:03:44.628709] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:00.635 [2024-07-15 08:03:44.628715] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:00.636 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:00.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:00.636 Initializing NVMe Controllers 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:00.636 00:28:00.636 real 0m0.113s 00:28:00.636 user 0m0.053s 00:28:00.636 sys 
0m0.060s 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.636 ************************************ 00:28:00.636 END TEST nvmf_target_disconnect_tc1 00:28:00.636 ************************************ 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:00.636 ************************************ 00:28:00.636 START TEST nvmf_target_disconnect_tc2 00:28:00.636 ************************************ 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3414117 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3414117 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3414117 ']' 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
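nvmf_target_disconnect_tc1 passes precisely because the probe fails: nothing is listening on 10.0.0.2:4420 yet, so connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot create the admin qpair, the reconnect binary exits nonzero, and the NOT wrapper turns that expected failure (es=1) into a passing test. Reduced to plain shell, the assertion has roughly this shape (illustrative; the harness goes through its NOT helper rather than a bare conditional):

  # tc1 expects the connection attempt to fail while no target is up
  if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo 'reconnect failed against a closed port, as expected'
  fi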
00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.636 08:03:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.636 [2024-07-15 08:03:44.765054] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:28:00.636 [2024-07-15 08:03:44.765092] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.636 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.636 [2024-07-15 08:03:44.834082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.636 [2024-07-15 08:03:44.912808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.636 [2024-07-15 08:03:44.912845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.636 [2024-07-15 08:03:44.912853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.636 [2024-07-15 08:03:44.912858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.636 [2024-07-15 08:03:44.912863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.636 [2024-07-15 08:03:44.912972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:00.636 [2024-07-15 08:03:44.913079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:00.636 [2024-07-15 08:03:44.913184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.636 [2024-07-15 08:03:44.913185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.896 Malloc0 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:00.896 08:03:45 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.896 [2024-07-15 08:03:45.639174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.896 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.155 [2024-07-15 08:03:45.668187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3414262 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:01.155 08:03:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:01.155 EAL: No free 2048 kB 
hugepages reported on node 1 00:28:03.066 08:03:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3414117 00:28:03.066 08:03:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 
starting I/O failed 00:28:03.066 [2024-07-15 08:03:47.695394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Read completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.066 starting I/O failed 00:28:03.066 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 [2024-07-15 08:03:47.695586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 
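The flood of aborted completions here is the direct consequence of the kill -9 a few lines earlier: tc2 provisioned a target inside the namespace (a Malloc0 bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420), launched the reconnect workload against it, and then killed the target pid mid-I/O. The rpc_cmd calls in the trace correspond to the stock SPDK rpc.py interface roughly as follows (a sketch; rpc.py would be pointed at the target's RPC socket):

  # provision the target that tc2 later kills out from under the initiator
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420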
00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 [2024-07-15 08:03:47.695786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 
Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Write completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 Read completed with error (sct=0, sc=8) 00:28:03.067 starting I/O failed 00:28:03.067 [2024-07-15 08:03:47.695978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:03.067 [2024-07-15 08:03:47.696111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.696131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.696276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.696308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.696536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.696566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.696697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.696726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.696877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.696908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.697054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.697083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 
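Two failure signatures appear in this stretch. The (sct=0, sc=8) completions are NVMe status fields: status code type 0 is the generic command set, and code 0x08 there is "Command Aborted due to SQ Deletion", i.e. in-flight commands being completed as aborted once the CQ transport error (-6, No such device or address) tore down the queues to the killed target. The posix_sock_create lines are the transport layer: errno 111 is ECONNREFUSED, because nothing accepts connections on 10.0.0.2:4420 anymore. A throwaway helper for reading the status pairs in this log (hypothetical, not part of the harness):

  # decode the (sct, sc) status pair printed in the completions above
  decode_nvme_status() {
      local sct=$1 sc=$2
      if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
          echo 'generic status 0x08: Command Aborted due to SQ Deletion'
      else
          echo "sct=$sct sc=$sc: see the NVMe base spec status code tables"
      fi
  }
  decode_nvme_status 0 8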
00:28:03.067 [2024-07-15 08:03:47.697244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.697277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.698396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.698434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.698570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.698592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.698847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.698868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.699043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.699064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.699242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.067 [2024-07-15 08:03:47.699274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.067 qpair failed and we were unable to recover it. 00:28:03.067 [2024-07-15 08:03:47.699403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.699435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.699626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.699657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.699793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.699813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.699911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.699931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 
00:28:03.068 [2024-07-15 08:03:47.700034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.700053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.700274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.700345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.700506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.700541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.701744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.701772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.701967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.701984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.702120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.702152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.702426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.702458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.702586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.702618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.702740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.702756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.702949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.702964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 
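The tqpair values are pointers to the initiator-side qpair objects, so the switch from 0x7f2e28000b90 to 0xdbaed0 in the middle of the storm shows the failures span more than one qpair instance across the reconnect sequence. When triaging a long disconnect log it can help to list how many distinct qpairs were involved (run against a saved copy of this log; the file name is arbitrary):

  # list the distinct qpair addresses mentioned in the failure storm
  grep -o 'tqpair=0x[0-9a-f]*' nvmf_target_disconnect.log | sort -u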
00:28:03.068 [2024-07-15 08:03:47.703072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.703982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.703998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.704162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.704177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.068 [2024-07-15 08:03:47.704324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.704340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 
00:28:03.068 [2024-07-15 08:03:47.704425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.068 [2024-07-15 08:03:47.704439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.068 qpair failed and we were unable to recover it. 00:28:03.069 (identical connect()/qpair-failure pair repeated for every retry through [2024-07-15 08:03:47.714408])
00:28:03.070 [2024-07-15 08:03:47.714549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.714565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.714786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.714818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.715024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.715055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.715196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.715240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.715454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.715470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.715622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.715652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.715771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.715802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.716000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.716031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.716158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.716197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.717135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.717165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 
00:28:03.070 [2024-07-15 08:03:47.717339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.717357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.717580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.717612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.717725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.717756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.717887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.717918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.718042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.718077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.718329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.718362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.718593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.718631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.718859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.718890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.719101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.719133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.719336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.719351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 
00:28:03.070 [2024-07-15 08:03:47.720507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.720534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.720644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.720660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.720872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.720888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.721892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.721922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.722063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.722094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 
00:28:03.070 [2024-07-15 08:03:47.722289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.722306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.722454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.722469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.722653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.722693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.722888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.070 [2024-07-15 08:03:47.722919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.070 qpair failed and we were unable to recover it. 00:28:03.070 [2024-07-15 08:03:47.723042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.723073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.723346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.723377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.723567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.723598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.723822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.723853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.723987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.724169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 
00:28:03.071 [2024-07-15 08:03:47.724404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.724565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.724787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.724899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.724944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.725079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.725111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.725294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.725328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.725539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.725571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.725756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.725786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.725914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.725944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.726147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.726178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 
00:28:03.071 [2024-07-15 08:03:47.726447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.726478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.727653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.727679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.727940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.727957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.728862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.728893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.729091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.729121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 
00:28:03.071 [2024-07-15 08:03:47.729364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.729380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.729473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.729512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.729652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.729683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.729807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.729838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.730016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.730048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.730238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.730270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.730465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.730496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.730679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.071 [2024-07-15 08:03:47.730709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.071 qpair failed and we were unable to recover it. 00:28:03.071 [2024-07-15 08:03:47.730829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.730860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.730969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 
00:28:03.072 [2024-07-15 08:03:47.731122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.731359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.731542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.731692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.731865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.731896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 
00:28:03.072 [2024-07-15 08:03:47.732858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.732872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.732973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.733953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.733968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.734041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.734208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 
00:28:03.072 [2024-07-15 08:03:47.734319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.734486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.734647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.734804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.734835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 
00:28:03.072 [2024-07-15 08:03:47.735810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.735911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.735925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.736893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.736906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.737055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.072 [2024-07-15 08:03:47.737070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.072 qpair failed and we were unable to recover it. 00:28:03.072 [2024-07-15 08:03:47.737145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.737159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 
00:28:03.073 [2024-07-15 08:03:47.738082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.738111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.738276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.738293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.738484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.738515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.738769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.738800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.738931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.738962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.739156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.739187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.739392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.739409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.739575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.739606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.739785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.739816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.739929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.739961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 
00:28:03.073 [2024-07-15 08:03:47.740140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.740375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.740529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.740707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.740805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.740957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.740974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.741120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.741223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.741324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.741492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 
00:28:03.073 [2024-07-15 08:03:47.741713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.741933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.741964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.742891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.742985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.743000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 00:28:03.073 [2024-07-15 08:03:47.743085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.073 [2024-07-15 08:03:47.743099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.073 qpair failed and we were unable to recover it. 
00:28:03.073 [2024-07-15 08:03:47.743181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.073 [2024-07-15 08:03:47.743195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.073 qpair failed and we were unable to recover it.
[the connect()/qpair-failure triplet above repeats for tqpair=0xdbaed0 through 08:03:47.757]
00:28:03.075 [2024-07-15 08:03:47.757464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.075 [2024-07-15 08:03:47.757542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:03.075 qpair failed and we were unable to recover it.
[the triplet repeats for tqpair=0x7f2e30000b90 through 08:03:47.758, then resumes for tqpair=0xdbaed0 through 08:03:47.782]
00:28:03.078 [2024-07-15 08:03:47.782516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.078 [2024-07-15 08:03:47.782532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.078 qpair failed and we were unable to recover it.
00:28:03.078 [2024-07-15 08:03:47.782617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.782631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.782747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.782778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.782914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.782946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.783162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.783331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.783573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.783800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.783906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.783997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.784098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 
00:28:03.079 [2024-07-15 08:03:47.784205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.784332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.784547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.784720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.784847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.784879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.785013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.785045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.785177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.785208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.785405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.785438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.785569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.785600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.785785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.785816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 
00:28:03.079 [2024-07-15 08:03:47.786007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.786040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.786263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.786295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.786480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.786511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.786656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.786672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.786836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.786851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.787000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.787241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.787398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.787550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.787759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 
00:28:03.079 [2024-07-15 08:03:47.787889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.787905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.788065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.788080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.788185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.788216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.788348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.788378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.788494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.788525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.079 [2024-07-15 08:03:47.788629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.079 [2024-07-15 08:03:47.788660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.079 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.788833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.788848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.788946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.788960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 
00:28:03.080 [2024-07-15 08:03:47.789311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.789796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.789990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 
00:28:03.080 [2024-07-15 08:03:47.790847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.790946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.790962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.791963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.791993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.792113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 
00:28:03.080 [2024-07-15 08:03:47.792318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.792467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.792638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.792797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.792898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.792913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 
00:28:03.080 [2024-07-15 08:03:47.793629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.793917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.793947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.794130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.794160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.794299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.794332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.794519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.080 [2024-07-15 08:03:47.794551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.080 qpair failed and we were unable to recover it. 00:28:03.080 [2024-07-15 08:03:47.794673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.794688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.794768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.794781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.794865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.794878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.794958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.794973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 
00:28:03.081 [2024-07-15 08:03:47.795061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.795074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.795152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.795166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.795262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.795277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.795354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.795368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.796183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.796210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.796310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.796327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.796478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.796493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.796606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.796621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.797715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.797740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.797919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.797936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 
00:28:03.081 [2024-07-15 08:03:47.798102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.798118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.798217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.798240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.798902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.798928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.799895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.799926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.800108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.800138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 
00:28:03.081 [2024-07-15 08:03:47.800261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.800293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.800423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.800453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.800681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.800712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.800820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.800835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.800983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.801144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.801362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.801594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.801754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.801915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.801946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 
00:28:03.081 [2024-07-15 08:03:47.802060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.802090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.802214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.802300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.802497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.802528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.802668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.802699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.802817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.802848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.802975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.803011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.081 [2024-07-15 08:03:47.803140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.081 [2024-07-15 08:03:47.803172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.081 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.804077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.804103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.804281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.804297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.804380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.804395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 
00:28:03.082 [2024-07-15 08:03:47.804495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.804510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.805222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.805255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.805397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.805430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.805679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.805710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.805920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.805952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.806071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.806102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.806306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.806338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.806475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.806506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.806621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.806652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 00:28:03.082 [2024-07-15 08:03:47.806783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.082 [2024-07-15 08:03:47.806814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.082 qpair failed and we were unable to recover it. 
00:28:03.082 [2024-07-15 08:03:47.806945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.082 [2024-07-15 08:03:47.806976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.082 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 08:03:47.807094 and 08:03:47.846174, almost all against tqpair=0xdbaed0, with a handful of identical failures against tqpair=0x7f2e30000b90 and tqpair=0x7f2e28000b90 ...]
00:28:03.359 [2024-07-15 08:03:47.846381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.359 [2024-07-15 08:03:47.846414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.359 qpair failed and we were unable to recover it.
00:28:03.359 [2024-07-15 08:03:47.846557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.846588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.846717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.846748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.846865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.846880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.847095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.847125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.847371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.847403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.847640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.847671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.847867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.847898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.848091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.848122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.848300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.848332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.848604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.848635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 
00:28:03.359 [2024-07-15 08:03:47.848822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.848836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.848997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.849012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.849182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.849213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.849425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.849456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.849700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.849739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.849883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.849898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.850111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.850127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.850448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.850464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.850552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.850566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.850719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.850734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 
00:28:03.359 [2024-07-15 08:03:47.850896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.850912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.851068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.851099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.359 qpair failed and we were unable to recover it. 00:28:03.359 [2024-07-15 08:03:47.851238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.359 [2024-07-15 08:03:47.851271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.851531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.851567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.851665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.851680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.851783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.851797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.851937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.851952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.852036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.852064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.852258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.852289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.852401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.852432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 
00:28:03.360 [2024-07-15 08:03:47.852553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.852584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.852836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.852852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.853837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.853868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.854046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.854077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.854269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.854302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 
00:28:03.360 [2024-07-15 08:03:47.854482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.854513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.854725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.854740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.854842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.854856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.855113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.855128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.855314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.855346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.855598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.855628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.855814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.855845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.855993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.856024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.856247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.856280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.856473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.856504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 
00:28:03.360 [2024-07-15 08:03:47.856681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.856711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.856970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.856985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.857086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.857114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.857356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.857388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.857523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.857553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.857701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.857732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.857930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.857960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.858159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.858191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.858348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.858380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.858563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.858593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 
00:28:03.360 [2024-07-15 08:03:47.858771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.858787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.858967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.360 [2024-07-15 08:03:47.858999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.360 qpair failed and we were unable to recover it. 00:28:03.360 [2024-07-15 08:03:47.859126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.859156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.859291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.859324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.859558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.859590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.859817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.859856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.859965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.859981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.860124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.860160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.860344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.860376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.860562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.860592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 
00:28:03.361 [2024-07-15 08:03:47.860700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.860731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.860919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.860951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.861174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.861205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.861433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.861465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.861598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.861617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.861825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.861840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.861905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.861920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.862064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.862078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.862166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.862180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.862351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.862383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 
00:28:03.361 [2024-07-15 08:03:47.862576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.862606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.862733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.862764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.863003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.863018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.863175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.863206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.863347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.863379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.863610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.863642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.863836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.863867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.864043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.864058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.864145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.864159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.864308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.864324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 
00:28:03.361 [2024-07-15 08:03:47.864500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.864531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.864709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.864739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.865001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.865032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.865235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.865267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.865475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.865505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.865714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.865745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.865939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.865970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.866154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.866401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.866611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 
00:28:03.361 [2024-07-15 08:03:47.866720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.866827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.361 qpair failed and we were unable to recover it. 00:28:03.361 [2024-07-15 08:03:47.866971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.361 [2024-07-15 08:03:47.866987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.867169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.867185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.867398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.867414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.867572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.867587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.867671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.867685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.867845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.867875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.868057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.868281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 
00:28:03.362 [2024-07-15 08:03:47.868498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.868706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.868814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.868968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.868984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.869194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.869209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.869362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.869394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.869669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.869699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.869883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.869913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.870110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.870125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.870352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.870385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 
00:28:03.362 [2024-07-15 08:03:47.870652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.870684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.870930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.870961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.871074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.871105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.871371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.871403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.871555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.871586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.871806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.871837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.871953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.871969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.872194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.872236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.872505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.872536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 00:28:03.362 [2024-07-15 08:03:47.872853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.362 [2024-07-15 08:03:47.872884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.362 qpair failed and we were unable to recover it. 
00:28:03.362 [2024-07-15 08:03:47.873134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.362 [2024-07-15 08:03:47.873165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.362 qpair failed and we were unable to recover it.
[... the identical three-record failure sequence repeats without interruption from 08:03:47.873 through 08:03:47.924: every connect() attempt returns errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for addr=10.0.0.2, port=4420 (almost always tqpair=0xdbaed0, with a few attempts on tqpair=0x7f2e38000b90 and tqpair=0x7f2e30000b90), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:03.368 [2024-07-15 08:03:47.924492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.924523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.924737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.924768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.925014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.925045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.925255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.925287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.925511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.925542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.925817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.925848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.926090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.926107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.926295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.926312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.926548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.926564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.926746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.926761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 
00:28:03.368 [2024-07-15 08:03:47.927005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.927038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.927312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.927344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.927563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.927594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.927868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.927884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.928034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.928049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.928297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.928314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.928531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.928562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.928819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.928851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.929111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.929127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.929292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.929324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 
00:28:03.368 [2024-07-15 08:03:47.929520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.929552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.929801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.929831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.930106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.930137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.930334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.930367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.930622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.930653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.930925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.930965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.931130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.931145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.931372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.931404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.931616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.931647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.931861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.931877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 
00:28:03.368 [2024-07-15 08:03:47.932094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.932125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.932332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.932364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.932634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.932665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.932915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.932951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.933212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.933252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.368 [2024-07-15 08:03:47.933549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.368 [2024-07-15 08:03:47.933580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.368 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.933778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.933809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.934058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.934089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.934362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.934394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.934615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.934647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 
00:28:03.369 [2024-07-15 08:03:47.934892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.934923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.935193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.935234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.935476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.935507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.935759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.935790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.936100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.936131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.936374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.936419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.936604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.936635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.936776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.936808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.937076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.937092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.937341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.937357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 
00:28:03.369 [2024-07-15 08:03:47.937523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.937538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.937698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.937729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.937912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.937943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.938190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.938221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.938413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.938428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.938643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.938659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.938875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.938906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.939156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.939187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.939337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.939368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.939552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.939583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 
00:28:03.369 [2024-07-15 08:03:47.939830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.939861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.940092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.940124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.940358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.940390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.940575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.940605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.940782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.940797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.940942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.940958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.941149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.941180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.941465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.941497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.941697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.941727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.942010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.942025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 
00:28:03.369 [2024-07-15 08:03:47.942179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.942195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.942344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.942361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.942582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.942613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.942803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.942834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.943114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.943156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.369 [2024-07-15 08:03:47.943318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.369 [2024-07-15 08:03:47.943334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.369 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.943554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.943585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.943830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.943846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.944006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.944021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.944260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.944293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 
00:28:03.370 [2024-07-15 08:03:47.944602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.944634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.944779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.944810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.945033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.945064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.945330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.945346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.945560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.945575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.945764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.945780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.945946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.945962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.946212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.946262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.946471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.946501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.946701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.946733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 
00:28:03.370 [2024-07-15 08:03:47.946935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.946966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.947171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.947186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.947415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.947431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.947579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.947595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.947750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.947786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.948054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.948085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.948214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.948257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.948532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.948563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.948829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.948860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.949151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.949183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 
00:28:03.370 [2024-07-15 08:03:47.949393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.949408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.949495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.949512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.949709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.949725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.949932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.949948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.950163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.950178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.950265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.950280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.950445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.950460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.950625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.950656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.950902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.950933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.951107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.951123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 
00:28:03.370 [2024-07-15 08:03:47.951349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.951364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.951646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.951677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.951808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.951840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.952153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.952183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.952408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.952424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.370 [2024-07-15 08:03:47.952644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.370 [2024-07-15 08:03:47.952659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.370 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.952816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.952849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.953124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.953155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.953378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.953411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.953563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 
00:28:03.371 [2024-07-15 08:03:47.953821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.953852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.954076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.954107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.954430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.954447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.954635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.954651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.954917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.954933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.955099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.955130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.955381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.955414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.955600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.955631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.955823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.955854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.956059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.956091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 
00:28:03.371 [2024-07-15 08:03:47.956285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.956316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.956503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.956534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.956743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.956774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.956972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.957003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.957267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.957300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.957447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.957478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.957659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.957690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.957977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.958009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.958206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.958249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.958477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.958509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 
00:28:03.371 [2024-07-15 08:03:47.958650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.958681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.958932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.958962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.959209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.959251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.959468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.959486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.959735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.959768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.959951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.959982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.960261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.960293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.960564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.960597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.960791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.960807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 00:28:03.371 [2024-07-15 08:03:47.960894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.371 [2024-07-15 08:03:47.960909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.371 qpair failed and we were unable to recover it. 
00:28:03.371 [2024-07-15 08:03:47.961149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.371 [2024-07-15 08:03:47.961165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.371 qpair failed and we were unable to recover it.
00:28:03.371 [2024-07-15 08:03:47.961268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.371 [2024-07-15 08:03:47.961283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.371 qpair failed and we were unable to recover it.
00:28:03.371 [2024-07-15 08:03:47.961503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.371 [2024-07-15 08:03:47.961578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:03.371 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.961813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.961848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.962073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.962107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.962284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.962319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.962639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.962683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.962945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.962976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.963246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.963280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.372 [2024-07-15 08:03:47.963531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.372 [2024-07-15 08:03:47.963562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.372 qpair failed and we were unable to recover it.
00:28:03.376 [2024-07-15 08:03:48.005198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.005214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.005340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.005372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.005655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.005688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.005904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.005937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.006222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.006246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.006420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.006437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.006540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.006555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.006753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.006770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.006929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.006961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.007203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.007255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 
00:28:03.376 [2024-07-15 08:03:48.007516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.007548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.007698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.007730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.376 [2024-07-15 08:03:48.008001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.376 [2024-07-15 08:03:48.008033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.376 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.008178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.008211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.008416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.008449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.008736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.008768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.008927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.008967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.009241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.009259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.009423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.009440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.009685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.009702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 
00:28:03.377 [2024-07-15 08:03:48.009894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.009926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.010069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.010100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.010359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.010391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.010586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.010618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.010824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.010856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.011082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.011098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.011323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.011340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.011533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.011550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.011778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.011794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.012049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.012081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 
00:28:03.377 [2024-07-15 08:03:48.012280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.012312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.012517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.012549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.012753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.012785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.013069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.013102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.013255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.013272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.013447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.013479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.013704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.013736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.014003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.014035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.014269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.014287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.014479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.014511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 
00:28:03.377 [2024-07-15 08:03:48.014769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.014801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.014955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.014988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.015148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.015331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.015512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.015689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.015873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.015995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.016118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.016374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 
00:28:03.377 [2024-07-15 08:03:48.016483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.016677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.377 [2024-07-15 08:03:48.016912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.377 [2024-07-15 08:03:48.016929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.377 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.017117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.017155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.017395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.017428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.017690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.017722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.018037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.018081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.018333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.018354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.018533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.018550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.018797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.018814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 
00:28:03.378 [2024-07-15 08:03:48.019102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.019134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.019368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.019402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.019613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.019629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.019827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.019859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.020070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.020102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.020363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.020397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.020630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.020662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.020931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.020962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.021217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.021241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.021475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.021491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 
00:28:03.378 [2024-07-15 08:03:48.021745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.021761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.021854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.021870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.022068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.022085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.022260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.022278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.022446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.022463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.022664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.022697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.023034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.023066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.023298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.023315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.023489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.023506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.023660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.023678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 
00:28:03.378 [2024-07-15 08:03:48.023861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.023878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.024057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.024074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.024300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.024318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.024565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.024598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.024870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.024907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.025117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.025149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.025401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.025419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.025606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.025638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.025780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.025811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.026094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.026125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 
00:28:03.378 [2024-07-15 08:03:48.026359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.026394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.026530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.026561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.026698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.026729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.378 qpair failed and we were unable to recover it. 00:28:03.378 [2024-07-15 08:03:48.027009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.378 [2024-07-15 08:03:48.027041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.027166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.027196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.027407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.027425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.027552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.027569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.027738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.027755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.027992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.028167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 
00:28:03.379 [2024-07-15 08:03:48.028284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.028487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.028642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.028808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.028840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.029125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.029157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.029309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.029343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.029625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.029661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.029948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.029980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.030247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.030280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.030427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.030458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 
00:28:03.379 [2024-07-15 08:03:48.030644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.030676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.030908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.030940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.031159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.031176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.031409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.031427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.031662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.031678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.031841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.031873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.032186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.032202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.032375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.032392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.032626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.032658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.032849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.032881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 
00:28:03.379 [2024-07-15 08:03:48.033095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.033128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.033279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.033296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.033546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.033563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.033788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.033805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.033963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.033979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.034133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.034153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.034332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.034349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.034535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.034567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.034779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.034811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.035004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.035036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 
00:28:03.379 [2024-07-15 08:03:48.035263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.035280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.035461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.035478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.035677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.035709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.379 [2024-07-15 08:03:48.035921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.379 [2024-07-15 08:03:48.035953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.379 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.036205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.036222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.036381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.036398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.036667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.036683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.036914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.036946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.037179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.037211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 00:28:03.380 [2024-07-15 08:03:48.037524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.037541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it. 
00:28:03.380 [2024-07-15 08:03:48.037704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.380 [2024-07-15 08:03:48.037721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.380 qpair failed and we were unable to recover it.
00:28:03.380 - 00:28:03.385 [2024-07-15 08:03:48.037968 - 2024-07-15 08:03:48.088761] (the same three-message sequence repeats continuously: posix_sock_create connect() failure with errno = 111, the nvme_tcp_qpair_connect_sock connection error for tqpair=0xdbaed0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; only the timestamps advance)
00:28:03.385 [2024-07-15 08:03:48.089069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.089102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.089265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.089298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.089461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.089493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.089655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.089688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.089882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.089914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.090144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.090176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.090380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.090414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.090638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.090670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.090864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.090897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.091161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.091193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 
00:28:03.385 [2024-07-15 08:03:48.091501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.091517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.091642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.091659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.091768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.091783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.091896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.091912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.092079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.092111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.385 [2024-07-15 08:03:48.092250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.385 [2024-07-15 08:03:48.092284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.385 qpair failed and we were unable to recover it. 00:28:03.386 [2024-07-15 08:03:48.092575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.386 [2024-07-15 08:03:48.092609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.386 qpair failed and we were unable to recover it. 00:28:03.386 [2024-07-15 08:03:48.092879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.386 [2024-07-15 08:03:48.092895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.386 qpair failed and we were unable to recover it. 00:28:03.386 [2024-07-15 08:03:48.093182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.386 [2024-07-15 08:03:48.093215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.386 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.093469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.093504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 
00:28:03.662 [2024-07-15 08:03:48.093719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.093756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.093903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.093937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.094240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.094274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.094487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.094520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.094657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.094689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.094966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.094998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.095275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.095293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.095560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.095597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.095829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.095862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.096122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.096154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 
00:28:03.662 [2024-07-15 08:03:48.096461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.096502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.096759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.096776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.096966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.096983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.097262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.097282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.097536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.097572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.097846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.097880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.098192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.098234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.098522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.098555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.098784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.098817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.099024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.099056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 
00:28:03.662 [2024-07-15 08:03:48.099360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.099377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.099529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.099547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.099774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.099791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.100069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.100086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.100246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.100263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.100437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.100470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.100637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.100669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.100812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.100849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.101147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.101179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.101415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.101448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 
00:28:03.662 [2024-07-15 08:03:48.101661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.101678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.662 qpair failed and we were unable to recover it. 00:28:03.662 [2024-07-15 08:03:48.101843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.662 [2024-07-15 08:03:48.101875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.102104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.102136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.102348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.102382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.102511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.102525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.102680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.102697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.102937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.102969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.103177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.103209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.103407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.103424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.103705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.103738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 
00:28:03.663 [2024-07-15 08:03:48.103862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.103894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.104023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.104055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.104282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.104315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.104576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.104609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.104802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.104833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.105063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.105095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.105398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.105415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.105613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.105646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.105951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.105982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.106202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.106244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 
00:28:03.663 [2024-07-15 08:03:48.106452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.106469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.106668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.106700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.106834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.106866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.106999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.107033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.107223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.107247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.107406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.107423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.107577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.107594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.107844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.107876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.108141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.108173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.108391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.108409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 
00:28:03.663 [2024-07-15 08:03:48.108511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.108526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.108706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.108723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.108894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.108911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.109153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.109170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.109347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.109383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.109530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.109562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.109759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.109791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.109996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.110028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.110311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.110350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.110499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.110516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 
00:28:03.663 [2024-07-15 08:03:48.110692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.110709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.663 [2024-07-15 08:03:48.110915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.663 [2024-07-15 08:03:48.110933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.663 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.111089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.111106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.111217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.111242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.111404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.111421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.111654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.111686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.111927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.111960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.112179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.112211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.112482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.112514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.112705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.112722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 
00:28:03.664 [2024-07-15 08:03:48.112887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.112919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.113177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.113209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.113437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.113454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.113684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.113717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.113990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.114022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.114218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.114281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.114488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.114521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.114731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.114748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.115002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.115034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.115251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.115284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 
00:28:03.664 [2024-07-15 08:03:48.115566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.115583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.115841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.115872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.116149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.116181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.116412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.116445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.116716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.116753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.117040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.117078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.117282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.117315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.117469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.117501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.117777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.117809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.118114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.118145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 
00:28:03.664 [2024-07-15 08:03:48.118269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.118315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.118607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.118651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.118911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.118928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.119097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.119114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.119273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.119292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.119546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.119578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.119766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.119799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.120069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.120101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.120328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.120361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 00:28:03.664 [2024-07-15 08:03:48.120626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.120659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it. 
00:28:03.664 [2024-07-15 08:03:48.120963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.664 [2024-07-15 08:03:48.120996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.664 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back-to-back roughly 210 times between 08:03:48.120963 and 08:03:48.176288 (log clock 00:28:03.664 through 00:28:03.670); only the timestamps vary, and every occurrence reports the same failing endpoint, tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 ...]
00:28:03.670 [2024-07-15 08:03:48.176257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.176288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it.
00:28:03.670 [2024-07-15 08:03:48.176525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.176557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.176837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.176870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.177157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.177190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.177502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.177535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.177810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.177827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.177934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.177952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.178206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.178260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.178494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.178526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.178733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.178773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.178998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 
00:28:03.670 [2024-07-15 08:03:48.179184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.179201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.179375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.179409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.179692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.179724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.179947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.179979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.180102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.180134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.180397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.180444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.180694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.180731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.181014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.181047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.181336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.181369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.181624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.181657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 
00:28:03.670 [2024-07-15 08:03:48.181985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.182024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.670 [2024-07-15 08:03:48.182286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.670 [2024-07-15 08:03:48.182319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.670 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.182525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.182559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.182830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.182863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.183091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.183123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.183363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.183397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.183663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.183680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.183785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.183799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.184053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.184086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.184308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.184341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 
00:28:03.671 [2024-07-15 08:03:48.184626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.184658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.184946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.184979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.185249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.185281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.185485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.185517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.185749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.185782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.186083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.186114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.186328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.186362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.186563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.186596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.186856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.186888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.187169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.187201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 
00:28:03.671 [2024-07-15 08:03:48.187498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.187531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.187813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.187845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.188141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.188173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.188341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.188375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.188639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.188656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.188884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.188917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.189135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.189168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.189477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.189511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.189707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.189724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.189887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.189904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 
00:28:03.671 [2024-07-15 08:03:48.190148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.190165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.190358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.190391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.190679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.190711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.190980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.191013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.191220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.191262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.191425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.191458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.191740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.191773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.191995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.192026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.192314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.192348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.192625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.192641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 
00:28:03.671 [2024-07-15 08:03:48.192816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.192833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.671 [2024-07-15 08:03:48.193077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.671 [2024-07-15 08:03:48.193096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.671 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.193211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.193251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.193494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.193527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.193785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.193803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.194028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.194045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.194332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.194366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.194563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.194596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.194804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.194848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.195073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.195090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 
00:28:03.672 [2024-07-15 08:03:48.195346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.195388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.195674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.195707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.195845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.195877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.196157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.196189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.196489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.196521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.196736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.196753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.196856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.196889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.197162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.197194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.197347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.197380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.197575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.197607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 
00:28:03.672 [2024-07-15 08:03:48.197826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.197842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.198014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.198030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.198237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.198272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.198472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.198504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.198808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.198840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.199115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.199147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.199403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.199421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.199587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.199604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.199908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.199928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.200179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.200196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 
00:28:03.672 [2024-07-15 08:03:48.200444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.200461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.200701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.200733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.200933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.200965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.201240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.201274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.201480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.201512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.201719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.201751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.201919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.201935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.202118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.202136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.202362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.202380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.202563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.202595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 
00:28:03.672 [2024-07-15 08:03:48.202906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.202938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.203212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.672 [2024-07-15 08:03:48.203257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.672 qpair failed and we were unable to recover it. 00:28:03.672 [2024-07-15 08:03:48.203550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.203584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.203780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.203812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.204081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.204098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.204189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.204205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.204382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.204398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.204638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.204671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.204862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.204895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.205044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.205075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 
00:28:03.673 [2024-07-15 08:03:48.205333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.205367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.205524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.205557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.205859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.205892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.206083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.206099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.206269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.206302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.206560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.206592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.206781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.206798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.206957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.206989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.207249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.207282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.207572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.207604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 
00:28:03.673 [2024-07-15 08:03:48.207818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.207850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.208084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.208115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.208397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.208436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.208706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.208723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.208918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.208934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.209184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.209201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.209435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.209453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.209633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.209650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.209827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.209843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 00:28:03.673 [2024-07-15 08:03:48.210079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.673 [2024-07-15 08:03:48.210099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.673 qpair failed and we were unable to recover it. 
00:28:03.673 [2024-07-15 08:03:48.210351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.673 [2024-07-15 08:03:48.210368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.673 qpair failed and we were unable to recover it.
[... the same three-line error group repeats continuously for tqpair=0xdbaed0 (addr=10.0.0.2, port=4420), from 2024-07-15 08:03:48.210503 through 08:03:48.254401; only the timestamps change ...]
[... five more identical error groups for tqpair=0xdbaed0, 2024-07-15 08:03:48.254659 through 08:03:48.255568 ...]
00:28:03.678 [2024-07-15 08:03:48.255752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.678 [2024-07-15 08:03:48.255830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:03.678 qpair failed and we were unable to recover it.
[... three more identical error groups for tqpair=0x7f2e38000b90, 08:03:48.256142 through 08:03:48.256734 ...]
00:28:03.678 [2024-07-15 08:03:48.256876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.678 [2024-07-15 08:03:48.256896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.678 qpair failed and we were unable to recover it.
[... the error group continues to repeat for tqpair=0xdbaed0, 2024-07-15 08:03:48.257154 through 08:03:48.261737 ...]
00:28:03.679 [2024-07-15 08:03:48.261940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.261956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.262191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.262233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.262519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.262551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.262860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.262893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.263096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.263396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.263429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.263565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.263597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.263819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.263852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.264073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.264313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.264347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 
00:28:03.679 [2024-07-15 08:03:48.264537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.264554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.264827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.264859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.265119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.265152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.265414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.265448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.265754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.265787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.265975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.265992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.266111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.266143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.266424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.266459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.266675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.266709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.267013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.267045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 
00:28:03.679 [2024-07-15 08:03:48.267340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.267381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.267675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.267708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.267937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.267953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.268196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.268213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.268440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.268473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.268662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.268696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.268974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.268991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.269233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.269250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.269403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.269420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.269617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.269649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 
00:28:03.679 [2024-07-15 08:03:48.269908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.269940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.270221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.270275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.270418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.270451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.270736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.270769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.679 [2024-07-15 08:03:48.270986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.679 [2024-07-15 08:03:48.271019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.679 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.271211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.271256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.271566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.271599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.271791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.271809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.272083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.272115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.272328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.272361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 
00:28:03.680 [2024-07-15 08:03:48.272641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.272673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.272883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.272915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.273035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.273065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.273354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.273388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.273650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.273681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.273810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.273826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.274068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.274085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.274358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.274375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.274482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.274498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.274667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.274684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 
00:28:03.680 [2024-07-15 08:03:48.274916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.274932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.275139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.275156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.275451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.275485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.275692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.275724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.275844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.275873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.276154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.276171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.276410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.276427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.276651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.276667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.276961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.276993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.277200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.277242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 
00:28:03.680 [2024-07-15 08:03:48.277501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.277548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.277662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.277679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.277856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.277873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.278121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.278153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.278476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.278510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.278747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.278764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.278991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.279007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.279197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.279214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.279476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.279520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.279834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.279867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 
00:28:03.680 [2024-07-15 08:03:48.280155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.280188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.280423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.280456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.280731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.280763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.281089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.281121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.281405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.281439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.680 qpair failed and we were unable to recover it. 00:28:03.680 [2024-07-15 08:03:48.281753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.680 [2024-07-15 08:03:48.281785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.282099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.282131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.282321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.282355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.282562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.282595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.282738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.282771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 
00:28:03.681 [2024-07-15 08:03:48.283067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.283099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.283404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.283437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.283710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.283743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.283999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.284016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.284246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.284280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.284469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.284501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.284696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.284714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.284963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.284995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.285202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.285246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.285542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.285575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 
00:28:03.681 [2024-07-15 08:03:48.285847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.285892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.286155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.286172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.286420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.286454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.286712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.286744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.287014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.287047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.287354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.287388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.287658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.287699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.287876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.287893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.287991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.288024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.288215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.288258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 
00:28:03.681 [2024-07-15 08:03:48.288457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.288490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.288689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.288706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.288891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.288924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.289129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.289161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.289373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.289406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.289690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.289721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.289925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.289958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.290196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.290255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.290398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.290431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.290690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.290723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 
00:28:03.681 [2024-07-15 08:03:48.290986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.291003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.291254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.291297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.291530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.291563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.291823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.291856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.292169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.292202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.681 qpair failed and we were unable to recover it. 00:28:03.681 [2024-07-15 08:03:48.292473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.681 [2024-07-15 08:03:48.292506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.292771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.292804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.292993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.293024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.293215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.293261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.293465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.293495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 
00:28:03.682 [2024-07-15 08:03:48.293757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.293788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.294007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.294024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.294141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.294158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.294355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.294388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.294642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.294674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.294854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.294870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.295045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.295077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.295206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.295250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.295394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.295426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 00:28:03.682 [2024-07-15 08:03:48.295707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.682 [2024-07-15 08:03:48.295744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.682 qpair failed and we were unable to recover it. 
00:28:03.682 [2024-07-15 08:03:48.296002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.682 [2024-07-15 08:03:48.296036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.682 qpair failed and we were unable to recover it.
00:28:03.682 [2024-07-15 08:03:48.296281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.682 [2024-07-15 08:03:48.296298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.682 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 08:03:48.296527 through 08:03:48.349962 ...]
00:28:03.687 [2024-07-15 08:03:48.350177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.687 [2024-07-15 08:03:48.350194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.687 qpair failed and we were unable to recover it.
00:28:03.687 [2024-07-15 08:03:48.350355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.350372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.350528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.350546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.350774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.350791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.350971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.350989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.351210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.351235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.351348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.351363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.351599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.351631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.351897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.351976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.352206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.352265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.352593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.352628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 
00:28:03.687 [2024-07-15 08:03:48.352852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.352885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.353127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.687 [2024-07-15 08:03:48.353160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.687 qpair failed and we were unable to recover it. 00:28:03.687 [2024-07-15 08:03:48.353388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.353421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.353657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.353690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.353897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.353930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.354139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.354176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.354425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.354458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.354680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.354713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.354973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.355006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.355160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.355192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 
00:28:03.688 [2024-07-15 08:03:48.355431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.355466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.355673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.355706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.355897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.355914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.356041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.356074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.356292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.356327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.356542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.356575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.356785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.356817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.357030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.357064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.357277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.357311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.357435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.357464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 
00:28:03.688 [2024-07-15 08:03:48.357666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.357698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.357842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.357874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.358019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.358052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.358309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.358327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.358534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.358566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.358726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.358760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.358894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.358927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.359165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.359197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.359359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.359393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.359682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.359716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 
00:28:03.688 [2024-07-15 08:03:48.359962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.359981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.360108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.360141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.360399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.360433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.360694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.360727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.360930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.360947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.361104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.361123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.361380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.688 [2024-07-15 08:03:48.361413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.688 qpair failed and we were unable to recover it. 00:28:03.688 [2024-07-15 08:03:48.361685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.361718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.361984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.362016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.362221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.362245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 
00:28:03.689 [2024-07-15 08:03:48.362356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.362373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.362648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.362680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.362964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.362996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.363223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.363275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.363466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.363498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.363623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.363655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.363866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.363897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.364103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.364135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.364438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.364472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.364660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.364692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 
00:28:03.689 [2024-07-15 08:03:48.364821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.364853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.365035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.365054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.365260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.365293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.365497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.365529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.365745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.365777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.365977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.366009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.366131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.366162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.366365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.366382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.366550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.366567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.366748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.366781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 
00:28:03.689 [2024-07-15 08:03:48.367002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.367034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.367264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.367298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.367425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.367458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.367669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.367699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.367840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.367872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.368117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.368134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.368219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.368244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.368483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.368515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.368700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.368732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.368962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.368994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 
00:28:03.689 [2024-07-15 08:03:48.369182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.369198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.369362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.369379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.369559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.369590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.369865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.369898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.370049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.370081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.370212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.370236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.370415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.370432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.689 [2024-07-15 08:03:48.370685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.689 [2024-07-15 08:03:48.370717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.689 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.370906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.370937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.371125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.371158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 
00:28:03.690 [2024-07-15 08:03:48.371299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.371333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.371524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.371555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.371742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.371774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.371909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.371925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.372719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.372734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 
00:28:03.690 [2024-07-15 08:03:48.373048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.373080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.373245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.373279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.373424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.373466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.373694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.373726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.373884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.373916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.374173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.374206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.374355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.374388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.374647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.374679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.374874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.374907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.375163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.375195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 
00:28:03.690 [2024-07-15 08:03:48.375453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.375469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.375656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.375673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.375769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.375784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.375876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.375890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.376088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.376120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.376247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.376282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.376494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.376525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.376649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.376681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.376883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.376899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.377127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.377159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 
00:28:03.690 [2024-07-15 08:03:48.377287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.377321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.377581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.377754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.377787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.377928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.377960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.378079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.378111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.378380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.378398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.378581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.378598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.378780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.378796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.690 qpair failed and we were unable to recover it. 00:28:03.690 [2024-07-15 08:03:48.378999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.690 [2024-07-15 08:03:48.379032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.379288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.379327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 
00:28:03.691 [2024-07-15 08:03:48.379539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.379571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.379850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.379889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.380108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.380124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.380364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.380382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.380486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.380502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.380656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.380702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.380960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.380993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.381184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.381231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.381385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.381402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.381534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.381567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 
00:28:03.691 [2024-07-15 08:03:48.381750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.381781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.382044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.382061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.382185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.382218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.382380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.382413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.382631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.382663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.382863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.382895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.383031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.383063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.383199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.383255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.383458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.383490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.383677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.383709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 
00:28:03.691 [2024-07-15 08:03:48.383911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.383943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.384139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.384171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.384475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.384511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.384714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.384746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.385024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.385347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.385364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.385593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.385610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.385764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.385780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.385873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.385888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.386000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.386016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 
00:28:03.691 [2024-07-15 08:03:48.386183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.386234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.386438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.386470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.386691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.386723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.386985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.387001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.387154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.387170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.387386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.387403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.387624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.387641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.387883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.387899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.388176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.388208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.691 qpair failed and we were unable to recover it. 00:28:03.691 [2024-07-15 08:03:48.388429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.691 [2024-07-15 08:03:48.388461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 
00:28:03.692 [2024-07-15 08:03:48.388737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.388774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.389055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.389087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.389364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.389381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.389664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.389695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.389920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.389952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.390155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.390188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.390468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.390485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.390713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.390728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.390894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.390910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.391159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.391191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 
00:28:03.692 [2024-07-15 08:03:48.391414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.391446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.391643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.391675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.391847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.391878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.392150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.392182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.392419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.392436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.392608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.392624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.392847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.392863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.393056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.393073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.393296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.393313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.393489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.393506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 
00:28:03.692 [2024-07-15 08:03:48.393676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.393708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.393931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.393963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.394265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.394282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.394467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.394498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.394694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.394727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.395051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.395084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.395295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.395328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.395635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.395672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.395894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.395926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.396188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 
00:28:03.692 [2024-07-15 08:03:48.396370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.396387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.396584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.396601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.396723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.396739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.692 [2024-07-15 08:03:48.396855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.692 [2024-07-15 08:03:48.396871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.692 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.397097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.397130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.397278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.397312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.397569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.397601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.397791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.397823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.398025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.398042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.398207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.398262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 
00:28:03.973 [2024-07-15 08:03:48.398545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.398578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.398779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.398811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.399145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.399470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.399581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.399751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.399992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.400008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.400153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.400186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.400403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.400435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 
00:28:03.973 [2024-07-15 08:03:48.400551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.400583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.400800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.400832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.401058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.401089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.401412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.401445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.401687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.401718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.401998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.402031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.402317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.402352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.402639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.402670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.402900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.402931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.403068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.403085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 
00:28:03.973 [2024-07-15 08:03:48.403242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.403258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.403425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.403442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.403570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.403602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.403898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.403930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.404211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.404268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.404548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.404565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.404806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.404823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.405064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.405080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.405186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.405205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.405451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.405484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 
00:28:03.973 [2024-07-15 08:03:48.405738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.405770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.406009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.406040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.406264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.973 [2024-07-15 08:03:48.406297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.973 qpair failed and we were unable to recover it. 00:28:03.973 [2024-07-15 08:03:48.406554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.406587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.406742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.406774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.407001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.407034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.407338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.407355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.407550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.407566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.407735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.407752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.407987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.408018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 
00:28:03.974 [2024-07-15 08:03:48.408294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.408328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.408581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.408613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.408808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.408840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.409046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.409077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.409268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.409285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.409558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.409590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.409780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.409812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.409999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.410031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.410312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.410345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.410571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.410603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 
00:28:03.974 [2024-07-15 08:03:48.410882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.410914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.411180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.411219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.411454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.411471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.411695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.411712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.411984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.412001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.412248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.412268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.412440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.412456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.412705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.412721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.412896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.412928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.413208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.413229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 
00:28:03.974 [2024-07-15 08:03:48.413471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.413487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.413653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.413669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.413918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.413934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.414185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.414203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.414396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.414414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.414656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.414674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.414941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.414973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.415242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.415274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.415569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.415586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.415744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.415761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 
00:28:03.974 [2024-07-15 08:03:48.416029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.416045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.416245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.416263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.416496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.416513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.416704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.974 [2024-07-15 08:03:48.416721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.974 qpair failed and we were unable to recover it. 00:28:03.974 [2024-07-15 08:03:48.416847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.416880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.417141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.417173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.417496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.417530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.417822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.417855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.418092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.418124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.418268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.418303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 
00:28:03.975 [2024-07-15 08:03:48.418586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.418605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.418832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.418849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.419022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.419039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.419282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.419316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.419511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.419543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.419821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.419861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.420028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.420045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.420200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.420218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.420479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.420512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.420737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.420768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 
00:28:03.975 [2024-07-15 08:03:48.421052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.421085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.421339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.421357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.421460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.421475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.421717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.421749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.422032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.422065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.422285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.422318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.422603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.422646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.422929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.422962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.423239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.423257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.423501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.423518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 
00:28:03.975 [2024-07-15 08:03:48.423774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.423806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.424002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.424019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.424274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.424307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.424498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.424530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.424716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.424748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.424936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.424967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.425165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.425197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.425339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.425356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.425606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.425639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 00:28:03.975 [2024-07-15 08:03:48.425922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.975 [2024-07-15 08:03:48.425954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.975 qpair failed and we were unable to recover it. 
00:28:03.975 [2024-07-15 08:03:48.426214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.975 [2024-07-15 08:03:48.426247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.975 qpair failed and we were unable to recover it.
00:28:03.975 [2024-07-15 08:03:48.426497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.975 [2024-07-15 08:03:48.426528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.975 qpair failed and we were unable to recover it.
00:28:03.975 [2024-07-15 08:03:48.426800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.975 [2024-07-15 08:03:48.426832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.975 qpair failed and we were unable to recover it.
00:28:03.975 [2024-07-15 08:03:48.427084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.975 [2024-07-15 08:03:48.427100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.975 qpair failed and we were unable to recover it.
00:28:03.975 [2024-07-15 08:03:48.427275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.975 [2024-07-15 08:03:48.427293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.975 qpair failed and we were unable to recover it.
00:28:03.975 [2024-07-15 08:03:48.427464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.427481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.427713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.427745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.428020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.428052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.428340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.428374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.428665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.428697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.428978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.429011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.429307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.429341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.429618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.429634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.429809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.429825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.430076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.430093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.430257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.430290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.430600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.430632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.430919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.430951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.431158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.431201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.431461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.431478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.431634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.431651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.431841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.431874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.432072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.432104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.432411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.432445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.432646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.432678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.432890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.432922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.433175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.433192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.433403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.433423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.433677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.433700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.433820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.433852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.434152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.434185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.434402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.434419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.434668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.434685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.434780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.434797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.435944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.435976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.436171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.436204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.436534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.436567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.436852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.436885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.437168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.437200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.437473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.437507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.976 [2024-07-15 08:03:48.437774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.976 [2024-07-15 08:03:48.437807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.976 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.438114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.438146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.438418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.438456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.438661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.438678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.438789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.438806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.439110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.439144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.439411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.439444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.439721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.439753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.440007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.440040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.440258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.440279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.440459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.440491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.440702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.440734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.440958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.440989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.441248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.441265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.441437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.441469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.441616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.441648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.441853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.441885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.442190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.442223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.442519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.442553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.442782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.442814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.443042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.443075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.443298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.443317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.443499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.443516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.443643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.443661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.443817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.443834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.444035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.444053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.444223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.444250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.444499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.444531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.444742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.444774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.445016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.445048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.445329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.445376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.445488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.445503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.445677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.445694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.445912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.445929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.446048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.977 [2024-07-15 08:03:48.446064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.977 qpair failed and we were unable to recover it.
00:28:03.977 [2024-07-15 08:03:48.446169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.446185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.446354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.446371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.446560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.446593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.446829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.446861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.447142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.447159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.447320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.447338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.447508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.447525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.447750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.447767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.447889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.447906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.448098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.448115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.448371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.448388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.448566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.448583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.448782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.448814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.449084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.449115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.449346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.449379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.449527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.449560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.449701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.449733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.450014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.450047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.450328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.450346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.450537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.450554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.450680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.450697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.450931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.450947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.451191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.451208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.451477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.451494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.451726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.451758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.452027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.452059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.452368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.452386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.452617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.452634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.452793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.452810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.452997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.453014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.453203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.453249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.453534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.453567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.453775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.453806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.454014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.454046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.454271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.454305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.454519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.454552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.454748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.454780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.454987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.455019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.455283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.455317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.455586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.978 [2024-07-15 08:03:48.455618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.978 qpair failed and we were unable to recover it.
00:28:03.978 [2024-07-15 08:03:48.455879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.455912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.456163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.456180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.456371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.456401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.456629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.456646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.456819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.456837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.457013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.457045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.457254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.457288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.457432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.457463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.457793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.457825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.458102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.458144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.458370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.458388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.458544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.458562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.458790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.458807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.459037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.459069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.459333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.459367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.459505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.459537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.459740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.459757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.460005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.460043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.460274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.460308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.460591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.460608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.460762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.460779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.461026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.461042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.461289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.461306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.461565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.461601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.461804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.461836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.462126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.462159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.462449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.462467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.462679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.462697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.462868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.462885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.463128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.463160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.463414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.463448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.463710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.463727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.463968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.463986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.464153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.464170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.464330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.464347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.464545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.464577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.464730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.464762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.464895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.464927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.465185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.465201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.465416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.465448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.465736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.465769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.979 qpair failed and we were unable to recover it.
00:28:03.979 [2024-07-15 08:03:48.466057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.979 [2024-07-15 08:03:48.466090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.466304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.466321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.466483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.466521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.466714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.466745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.467053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.467088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.467202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.467219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.467389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.467406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.467595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.467627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.467848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.980 [2024-07-15 08:03:48.467880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.980 qpair failed and we were unable to recover it.
00:28:03.980 [2024-07-15 08:03:48.468161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.468194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.468496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.468528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.468717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.468749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.468882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.468913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.469209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.469251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.469471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.469505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.469652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.469669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.469846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.469862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.470114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.470130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.470242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.470259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 
00:28:03.980 [2024-07-15 08:03:48.470414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.470431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.470583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.470600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.470913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.470944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.471090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.471122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.471385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.471427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.471574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.471606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.471806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.471840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.471986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.472003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.472219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.472276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.472407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.472437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 
00:28:03.980 [2024-07-15 08:03:48.472656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.472698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.472905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.472938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.473196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.473242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.473443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.473475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.473701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.473734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.473997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.474030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.474249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.474282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.474488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.474505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.474611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.474626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.474879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.474896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 
00:28:03.980 [2024-07-15 08:03:48.475139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.475156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.475262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.980 [2024-07-15 08:03:48.475277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.980 qpair failed and we were unable to recover it. 00:28:03.980 [2024-07-15 08:03:48.475457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.475489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.475697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.475729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.475967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.475999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.476194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.476237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.476428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.476460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.476666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.476698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.476958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.476990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.477220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.477248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 
00:28:03.981 [2024-07-15 08:03:48.477355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.477371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.477573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.477605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.477745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.477777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.477988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.478020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.478302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.478319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.478487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.478503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.478748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.478765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.479017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.479050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.479338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.479372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.479655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.479672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 
00:28:03.981 [2024-07-15 08:03:48.479830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.479847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.480037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.480069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.480441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.480476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.480608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.480640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.480927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.480959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.481221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.481265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.481526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.481558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.481773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.481806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.482070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.482102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.482397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.482414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 
00:28:03.981 [2024-07-15 08:03:48.482574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.482606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.482895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.482932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.483128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.483160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.483374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.483391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.483587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.483619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.483761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.483794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.484074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.484107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.981 [2024-07-15 08:03:48.484410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.981 [2024-07-15 08:03:48.484428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.981 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.484624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.484641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.484826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.484842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 
00:28:03.982 [2024-07-15 08:03:48.485097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.485130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.485364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.485381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.485493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.485524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.485659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.485691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.485955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.485987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.486286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.486320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.486541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.486573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.486793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.486826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.487043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.487074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.487209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.487252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 
00:28:03.982 [2024-07-15 08:03:48.487455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.487473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.487647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.487679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.487867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.487899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.488098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.488130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.488332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.488366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.488559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.488590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.488872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.488904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.489165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.489199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.489410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.489447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.489710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.489743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 
00:28:03.982 [2024-07-15 08:03:48.489955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.489987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.490251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.490285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.490603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.490636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.490825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.490856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.491050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.491083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.491289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.491307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.491572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.491604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.491828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.491860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.492145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.492178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.492394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.492427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 
00:28:03.982 [2024-07-15 08:03:48.492645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.492662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.492843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.492875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.493086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.493103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.493297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.493332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.493534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.493566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.493789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.493822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.493959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.493992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.494138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.494154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.982 [2024-07-15 08:03:48.494377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.982 [2024-07-15 08:03:48.494395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.982 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.494498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.494513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 
00:28:03.983 [2024-07-15 08:03:48.494767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.494783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.494958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.494975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.495093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.495230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.495405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.495577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.495679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.495885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc9000 is same with the state(5) to be set 00:28:03.983 [2024-07-15 08:03:48.496383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.496461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.496773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.496809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 
00:28:03.983 [2024-07-15 08:03:48.497082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.497115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.497321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.497355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.497622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.497654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.497770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.497790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.498071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.498088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.498190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.498206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.498304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.498320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.498571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.498588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.498755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.498772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.499004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.499036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 
00:28:03.983 [2024-07-15 08:03:48.499326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.499359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.499644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.499661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.499851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.499868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.500120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.500152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.500344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.500378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.500582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.500614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.500804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.500836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.501096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.501128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.501342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.501359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.501521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.501538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 
00:28:03.983 [2024-07-15 08:03:48.501755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.501771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.501884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.501900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.502130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.502147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.502426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.502447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.502613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.502629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.502823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.502854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.503138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.503183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.503384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.503402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.503651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.503683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 00:28:03.983 [2024-07-15 08:03:48.503942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.983 [2024-07-15 08:03:48.503973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.983 qpair failed and we were unable to recover it. 
00:28:03.984 [2024-07-15 08:03:48.504248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.984 [2024-07-15 08:03:48.504266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.984 qpair failed and we were unable to recover it.
[the same three-message failure repeats, with only the timestamps advancing, over 200 more times between 08:03:48.504 and 08:03:48.553; every attempt targets tqpair=0xdbaed0 with addr=10.0.0.2, port=4420, and every one ends in "qpair failed and we were unable to recover it."]
00:28:03.989 [2024-07-15 08:03:48.553666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.553699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.553893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.553910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.554148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.554180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.554452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.554486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.554688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.554720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.554858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.554890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.555101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.555133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.555407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.555425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.555584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.555601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.555774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.555790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 
00:28:03.989 [2024-07-15 08:03:48.555994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.556026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.556243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.556276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.556539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.556573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.556825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.556842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.557012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.557029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.557202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.557245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.557405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.557438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.557706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.557750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.557866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.557883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.558050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.558081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 
00:28:03.989 [2024-07-15 08:03:48.558283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.558315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.558573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.558590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.558823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.558854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.559124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.559157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.559463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.559496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.559704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.559736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.559954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.559997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.560292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.560338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.560627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.560665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 00:28:03.989 [2024-07-15 08:03:48.560950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.989 [2024-07-15 08:03:48.560982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.989 qpair failed and we were unable to recover it. 
00:28:03.990 [2024-07-15 08:03:48.561265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.561299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.561570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.561602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.561733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.561751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.561842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.561857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.562085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.562117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.562312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.562345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.562551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.562582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.562853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.562886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.563191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.563223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.563437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.563454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 
00:28:03.990 [2024-07-15 08:03:48.563563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.563578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.563802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.563834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.564045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.564077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.564371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.564405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.564636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.564653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.564852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.564884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.565072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.565104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.565373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.565390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.565583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.565599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 00:28:03.990 [2024-07-15 08:03:48.565753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.990 [2024-07-15 08:03:48.565770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.990 qpair failed and we were unable to recover it. 
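For reference: errno = 111 on Linux is ECONNREFUSED, i.e. each TCP SYN to 10.0.0.2 port 4420 was answered with RST because nothing was accepting connections on the NVMe/TCP port at that moment. The minimal standalone sketch below reproduces the same failure outside SPDK; it is not the posix_sock_create code path, just a plain blocking socket connect, with the address and port taken from the log above:

    /* Minimal sketch: reproduce errno 111 (ECONNREFUSED) by connecting to a
     * TCP address/port with no listener. Address and port mirror the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against a host with no listener on the port, the printf reports errno = 111 (Connection refused), matching the posix_sock_create lines above.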
00:28:03.990 [2024-07-15 08:03:48.567148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.990 [2024-07-15 08:03:48.567248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:03.990 qpair failed and we were unable to recover it.
00:28:03.990 [2024-07-15 08:03:48.567541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.990 [2024-07-15 08:03:48.567578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:03.990 qpair failed and we were unable to recover it.
00:28:03.993 [... after these two attempts against tqpair=0x7f2e38000b90, the sequence resumes for tqpair=0xdbaed0 (addr=10.0.0.2, port=4420) and repeats continuously with timestamps 08:03:48.567 through 08:03:48.596 ...]
00:28:03.993 [2024-07-15 08:03:48.597184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.597215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.597516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.597557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.597826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.597858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.598121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.598153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.598460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.598494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.598801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.598832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.599128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.599160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.599444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.599476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.599772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.599805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.600086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.600118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 
00:28:03.993 [2024-07-15 08:03:48.600409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.600442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.600631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.600663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.600933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.600965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.601162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.601194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.601364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.601398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.601581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.601615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.993 [2024-07-15 08:03:48.601842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.993 [2024-07-15 08:03:48.601874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.993 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.602181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.602213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.602522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.602555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.602762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.602778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 
00:28:03.994 [2024-07-15 08:03:48.602978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.602994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.603222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.603272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.603460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.603477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.603661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.603694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.603913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.603944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.604267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.604301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.604549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.604582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.604871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.604902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.605090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.605122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.605334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.605368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 
00:28:03.994 [2024-07-15 08:03:48.605613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.605630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.605904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.605921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.606091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.606108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.606294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.606311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.606566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.606608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.606880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.606912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.607110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.607142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.607415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.607449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.607602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.607634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.607837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.607854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 
00:28:03.994 [2024-07-15 08:03:48.608112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.608145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.608337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.608370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.608651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.608684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.608884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.608902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.609061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.609078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.609176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.609191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.609444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.609462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.609631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.609649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.609900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.609933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.610147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.610179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 
00:28:03.994 [2024-07-15 08:03:48.610406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.610449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.610640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.994 [2024-07-15 08:03:48.610657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.994 qpair failed and we were unable to recover it. 00:28:03.994 [2024-07-15 08:03:48.610826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.610858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.611141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.611173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.611443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.611476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.611785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.611818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.612084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.612116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.612399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.612433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.612745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.612777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.613029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.613062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 
00:28:03.995 [2024-07-15 08:03:48.613322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.613355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.613621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.613637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.613882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.613919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.614180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.614214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.614499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.614533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.614821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.614853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.615059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.615091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.615356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.615401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.615579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.615596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.615785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.615817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 
00:28:03.995 [2024-07-15 08:03:48.616089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.616122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.616315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.616348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.616483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.616500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.616695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.616711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.616958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.616995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.617277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.617310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.617606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.617638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.617921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.617953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.618174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.618207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.618442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.618475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 
00:28:03.995 [2024-07-15 08:03:48.618753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.618788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.619073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.619089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.619273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.619290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.619547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.619578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.619712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.619744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.620056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.620088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.620376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.620410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.620719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.620752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.621031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.621063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.621287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.621321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 
00:28:03.995 [2024-07-15 08:03:48.621549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.621581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.621889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.995 [2024-07-15 08:03:48.621921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.995 qpair failed and we were unable to recover it. 00:28:03.995 [2024-07-15 08:03:48.622191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.622234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.622510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.622542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.622831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.622863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.623157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.623189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.623472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.623504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.623793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.623825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.624059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.624091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.624361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.624395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 
00:28:03.996 [2024-07-15 08:03:48.624703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.624736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.625008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.625040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.625171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.625203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.625482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.625516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.625729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.625761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.625950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.625982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.626203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.626244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.626524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.626565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.626850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.626883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.627068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.627100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 
00:28:03.996 [2024-07-15 08:03:48.627336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.627369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.627559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.627576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.627848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.627879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.628079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.628111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.628374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.628408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.628618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.628634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.628812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.628829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.629011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.629044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.629306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.629354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.629582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.629599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 
00:28:03.996 [2024-07-15 08:03:48.629717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.629733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.629904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.629937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.630166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.630198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.630432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.630466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.630680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.630713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.630971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.631002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.631308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.631341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.631506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.631538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.631814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.631860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 00:28:03.996 [2024-07-15 08:03:48.632089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.996 [2024-07-15 08:03:48.632106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:03.996 qpair failed and we were unable to recover it. 
00:28:03.996 [2024-07-15 08:03:48.632197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.996 [2024-07-15 08:03:48.632214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.996 qpair failed and we were unable to recover it.
00:28:03.996 [2024-07-15 08:03:48.632448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.996 [2024-07-15 08:03:48.632465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.996 qpair failed and we were unable to recover it.
00:28:03.996 [2024-07-15 08:03:48.632563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.996 [2024-07-15 08:03:48.632578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.996 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.632835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.632881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.633069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.633100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.633286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.633321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.633529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.633546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.633713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.633745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.633936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.633969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.634255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.634288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.634479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.634511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.634700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.634732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.635008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.635024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.635179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.635211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.635534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.635566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.635846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.635863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.635986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.636001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.636244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.636277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.636478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.636510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.636722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.636755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.637037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.637068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.637327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.637362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.637659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.637690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.637986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.638018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.638301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.638336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.638628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.638660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.638930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.638962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.639257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.639289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.639500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.639532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.639794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.639826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.640039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.640071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.640360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.640392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.640550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.640582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.640776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.640808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.641002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.641019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.641266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.641284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.641535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.641567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.641827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.641860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.642168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.642185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.642365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.642382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.642611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.642644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.642930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.642962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.643100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.643132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.997 [2024-07-15 08:03:48.643440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.997 [2024-07-15 08:03:48.643474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.997 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.643740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.643773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.643960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.643978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.644169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.644186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.644387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.644421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.644632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.644664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.644869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.644908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.645058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.645074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.645327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.645344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.645465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.645482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.645685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.645717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.645907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.645940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.646236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.646271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.646478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.646509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.646702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.646734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.646938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.646956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.647244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.647277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.647560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.647592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.647795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.647812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.648001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.648018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.648135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.648150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.648431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.648464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.648676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.648709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.648984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.649016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.649206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.649250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.649508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.649546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.649821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.649853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.650137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.650169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.650468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.650501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.650691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.650723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.650930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.650963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.651251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.651284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.651510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.651541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.651761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.651793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.652095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.652112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.652288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.652305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.652553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.652570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.652764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.652781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.652951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.652968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.653148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.653181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.998 qpair failed and we were unable to recover it.
00:28:03.998 [2024-07-15 08:03:48.653413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.998 [2024-07-15 08:03:48.653446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.653651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.653668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.653845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.653876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.654074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.654106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.654409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.654443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.654727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.654759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.655043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.655076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.655375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.655408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.655689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.655721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.656012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.656044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.656246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.656280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.656488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.656520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.656802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.656838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.657073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.657107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.657336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.657371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.657650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.657682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.657877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.657909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.658120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.658151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.658356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.658390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.658700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.658732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.659013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.659030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.659328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.659362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.659497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.659530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.659816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.659848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.660121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.660162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.660460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.660495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.660632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.660670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.660880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.660896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.661009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.661027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.661199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.661215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.661465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.661499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.661788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.661832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.662021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.662038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.662204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.662221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.662383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.662399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.662654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.662686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:03.999 [2024-07-15 08:03:48.662991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.999 [2024-07-15 08:03:48.663024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:03.999 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.663240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.663274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.663484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.663516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.663735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.663752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.663917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.663934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.664039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.664055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.664253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.664271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.664519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.664537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.664785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.664821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.665014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.665046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.665315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.665348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.665628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.665661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.665955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.665987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.666296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.666329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.666581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.666614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.666899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.666931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.667122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.667154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.667346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.667385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.667602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.667634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.667783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.667815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.668096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.668128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.668386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.668420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.668578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.668611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.668889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.668905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.669157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.669195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.669398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.669431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.669624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.669656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.669877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.669910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.670098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.670130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.670391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.670426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.670709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.670742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.671025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.671042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.671268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.671301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.671566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.671611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.671769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.671786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.671943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.671960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.672148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.672180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.672414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.672448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.672662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.672694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.672984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.673016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.673208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.673259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.000 qpair failed and we were unable to recover it.
00:28:04.000 [2024-07-15 08:03:48.673470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.000 [2024-07-15 08:03:48.673502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.673802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.673819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.674061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.674078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.674366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.674400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.674557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.674589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.674794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.674826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.675113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.675154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.675355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.675388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.675612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.675644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.675958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.675991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.676258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.676291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.676424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.676453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.676656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.676674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.676778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.676796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.677021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.677038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.677212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.677242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.677502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.677519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.677670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.677692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.677973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.678005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.678290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.678324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.678632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.678665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.678935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.678967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.679273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.679306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.679508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.679541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.679732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.679764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.680025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.680057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.680367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.680400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.680600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.680633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.680889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.680921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.681128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.681161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.681435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.681468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.681666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.681699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.681906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.681938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.682198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.682240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.682529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.682562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.682841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.682873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.683084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.683116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.683337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.683371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.683659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.683691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.683961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.683993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.684201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.001 [2024-07-15 08:03:48.684245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.001 qpair failed and we were unable to recover it.
00:28:04.001 [2024-07-15 08:03:48.684555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.684589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.684894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.684926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.685117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.685149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.685410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.685449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.685643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.685675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.685955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.685971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.686232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.686249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.686419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.686436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.686681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.002 [2024-07-15 08:03:48.686698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.002 qpair failed and we were unable to recover it.
00:28:04.002 [2024-07-15 08:03:48.686947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.686980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.687206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.687250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.687469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.687502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.687787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.687819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.688097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.688114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.688357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.688375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.688544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.688561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.688740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.688772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.688988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.689020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.689329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.689362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 
00:28:04.002 [2024-07-15 08:03:48.689660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.689693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.689984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.690169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.690352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.690476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.690591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.690828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.690861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.691048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.691080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.691313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.691346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.691656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.691688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 
00:28:04.002 [2024-07-15 08:03:48.691954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.691987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.692247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.692280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.692559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.692591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.692905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.692937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.693203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.693253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.693541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.693573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.693848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.693880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.694170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.694209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.694490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.694524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.694752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.694785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 
00:28:04.002 [2024-07-15 08:03:48.695018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.695035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.695235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.002 [2024-07-15 08:03:48.695253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.002 qpair failed and we were unable to recover it. 00:28:04.002 [2024-07-15 08:03:48.695423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.695439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.695686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.695703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.695881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.695897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.696031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.696067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.696267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.696299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.696553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.696585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.696811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.696843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.697061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.697077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 
00:28:04.003 [2024-07-15 08:03:48.697305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.697337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.697545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.697578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.697807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.697839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.698125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.698157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.698396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.698429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.698634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.698667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.698947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.698983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.699190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.699222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.699566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.699601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.699765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.699797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 
00:28:04.003 [2024-07-15 08:03:48.700002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.700018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.700142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.700174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.700468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.700501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.700694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.700726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.700876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.700907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.701194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.701253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.701547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.701579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.701766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.701799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.702007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.702039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.702244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.702278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 
00:28:04.003 [2024-07-15 08:03:48.702487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.702519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.702833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.702865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.703034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.703053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.703242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.703259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.703385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.703402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.703668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.703700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.003 [2024-07-15 08:03:48.703928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.003 [2024-07-15 08:03:48.703960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.003 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.704193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.704210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.704484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.704505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.704637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.704654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 
00:28:04.283 [2024-07-15 08:03:48.704744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.704759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.704870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.704886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.705912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.705927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.706149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.706167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.706277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.706293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 
00:28:04.283 [2024-07-15 08:03:48.706421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.706438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.706603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.706624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.706877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.706895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.707073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.707090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.707252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.707269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.707449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.707466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.707584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.707600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.707804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.707821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.708048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.708065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.708293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.708310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 
00:28:04.283 [2024-07-15 08:03:48.708594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.708626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.708838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.708870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.709140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.709157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.709282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.709298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.709408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.709422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.283 [2024-07-15 08:03:48.709601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.283 [2024-07-15 08:03:48.709633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.283 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.709829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.709860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.710089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.710126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.710339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.710376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.710634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.710670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 
00:28:04.284 [2024-07-15 08:03:48.710881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.710915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.711076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.711093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.711294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.711313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.711441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.711464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.711641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.711659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.711829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.711847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.712080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.712097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.712271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.712294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.712476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.712493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.712654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.712671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 
00:28:04.284 [2024-07-15 08:03:48.712789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.712807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.713060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.713076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.713193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.713210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.713521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.713539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.713684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.713701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.713890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.713908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.714110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.714151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.714370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.714404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.714684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.714716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.714917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.714948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 
00:28:04.284 [2024-07-15 08:03:48.715149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.715165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.715281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.715312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.715604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.715638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.715851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.715883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.716880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.716897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 
00:28:04.284 [2024-07-15 08:03:48.717057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.717073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.717347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.717380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.717642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.717674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.717986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.718019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.718279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.718312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.718579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.284 [2024-07-15 08:03:48.718610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.284 qpair failed and we were unable to recover it. 00:28:04.284 [2024-07-15 08:03:48.718803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.718838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.719125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.719157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.719417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.719451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.719592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.719624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 
00:28:04.285 [2024-07-15 08:03:48.719818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.719851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.720049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.720080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.720297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.720330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.720472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.720504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.720781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.720816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.721001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.721018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.721106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.721120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.721240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.721254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.721491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.721523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.721713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.721746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 
00:28:04.285 [2024-07-15 08:03:48.722028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.722060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.722287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.722319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.722457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.722488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.722753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.722770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.722997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.723013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.723204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.723220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.723394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.723412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.723646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.723678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.724018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.724050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.724250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.724286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 
00:28:04.285 [2024-07-15 08:03:48.724578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.724610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.724894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.724927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.725206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.725272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.725586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.725618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.725817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.725834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.725995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.726011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.726113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.726129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.726285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.726317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.726532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.726564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.726843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.726875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 
00:28:04.285 [2024-07-15 08:03:48.727128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.727146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.727418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.727442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.727608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.727626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.727785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.727801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.728048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.728079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.728220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.728267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.285 [2024-07-15 08:03:48.728478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.285 [2024-07-15 08:03:48.728511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.285 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.728656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.728673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.728927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.728960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.729177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.729209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 
00:28:04.286 [2024-07-15 08:03:48.729499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.729531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.729810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.729827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.730050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.730067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.730296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.730313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.730423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.730455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.730765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.730796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.731002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.731174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.731346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.731467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 
00:28:04.286 [2024-07-15 08:03:48.731654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.731949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.731982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.732276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.732293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.732486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.732504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.732625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.732641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.732901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.732932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.733214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.733290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.733501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.733533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.733818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.733850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.734048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.734080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 
00:28:04.286 [2024-07-15 08:03:48.734216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.734240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.734411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.734444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.734725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.734758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.734988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.735005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.735164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.735181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.735410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.735429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.735537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.735584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.735796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.735829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.736054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.736100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.736345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.736362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 
00:28:04.286 [2024-07-15 08:03:48.736523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.736540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.736711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.736744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.736884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.736922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.737138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.737169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.737395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.737428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.737570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.737603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.737808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.737825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.286 qpair failed and we were unable to recover it. 00:28:04.286 [2024-07-15 08:03:48.737941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.286 [2024-07-15 08:03:48.737958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.738162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.738179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.738338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.738354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 
00:28:04.287 [2024-07-15 08:03:48.738517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.738549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.738858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.738874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.739048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.739079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.739281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.739314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.739512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.739544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.739677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.739709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.740001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.740254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.740373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.740494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 
00:28:04.287 [2024-07-15 08:03:48.740743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.740925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.740958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.741173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.741205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.741566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.741610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.741787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.741804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.742010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.742042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.742329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.742363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.742646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.742684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.742845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.742861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.743035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.743072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 
00:28:04.287 [2024-07-15 08:03:48.743289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.743324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.743450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.743482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.743761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.743792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.744076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.744109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.744402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.744435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.744598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.744631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.744919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.744952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.745092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.745125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.745434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.745468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.745596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.745628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 
00:28:04.287 [2024-07-15 08:03:48.745842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.287 [2024-07-15 08:03:48.745875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.287 qpair failed and we were unable to recover it. 00:28:04.287 [2024-07-15 08:03:48.746166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.746199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.746408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.746441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.746637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.746670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.746954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.746987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.747290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.747307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.747583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.747599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.747781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.747797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.748055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.748086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.748349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.748382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 
00:28:04.288 [2024-07-15 08:03:48.748661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.748701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.748873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.748890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.749014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.749031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.749260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.749278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.749519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.749536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.749769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.749786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.749953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.749971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.750160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.750177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.750416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.750433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.750638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.750654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 
00:28:04.288 [2024-07-15 08:03:48.750896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.750912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.751085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.751102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.751308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.751343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.751651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.751683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.751875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.751907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.752099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.752132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.752373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.752389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.752618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.752634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.752745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.752762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.752927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.752943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 
00:28:04.288 [2024-07-15 08:03:48.753131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.753151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.753398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.753415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.753597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.753629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.753851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.753867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.754071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.754102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.754386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.754420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.754615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.754647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.754849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.754866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.755121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.755153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.755391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.755424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 
00:28:04.288 [2024-07-15 08:03:48.755690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.755735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.288 [2024-07-15 08:03:48.755937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.288 [2024-07-15 08:03:48.755954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.288 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.756120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.756136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.756336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.756369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.756665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.756698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.756909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.756926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.757036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.757052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.757288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.757322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.757473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.757506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.757742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.757774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 
00:28:04.289 [2024-07-15 08:03:48.757978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.758021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.758186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.758203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.758456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.758490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.758778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.758811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.759092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.759125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.759415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.759449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.759744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.759776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.760058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.760098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.760307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.760343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 00:28:04.289 [2024-07-15 08:03:48.760547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.289 [2024-07-15 08:03:48.760581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.289 qpair failed and we were unable to recover it. 
00:28:04.289 [2024-07-15 08:03:48.763902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.289 [2024-07-15 08:03:48.763981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.289 qpair failed and we were unable to recover it.
00:28:04.292 [2024-07-15 08:03:48.794201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.292 [2024-07-15 08:03:48.794291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.292 qpair failed and we were unable to recover it.
00:28:04.294 [2024-07-15 08:03:48.813527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.813559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.813766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.813799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.814096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.814113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.814397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.814430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.814736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.814768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.814971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.815002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.815194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.815235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.815433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.815465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.815755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.815786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.816039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.816056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 
00:28:04.294 [2024-07-15 08:03:48.816287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.816320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.816607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.816639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.816953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.816986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.817277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.817312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.817550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.817583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.817801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.817834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.818131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.818164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.818482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.818517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.818715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.818748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 00:28:04.294 [2024-07-15 08:03:48.819006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.294 [2024-07-15 08:03:48.819025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.294 qpair failed and we were unable to recover it. 
00:28:04.295 [2024-07-15 08:03:48.819212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.819257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.819520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.819553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.819791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.819824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.820140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.820172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.820425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.820461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.820625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.820656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.820806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.820838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.821120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.821153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.821413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.821434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.821731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.821764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 
00:28:04.295 [2024-07-15 08:03:48.822060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.822092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.822254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.822288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.822477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.822509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.822812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.822844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.823116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.823137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.823262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.823279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.823508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.823540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.823769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.823802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.824010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.824042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.824173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.824190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 
00:28:04.295 [2024-07-15 08:03:48.824433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.824450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.824564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.824581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.824747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.824790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.825077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.825109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.825406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.825425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.825535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.825567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.825831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.825877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.826058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.826075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.826305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.826339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.826490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.826523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 
00:28:04.295 [2024-07-15 08:03:48.826669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.826701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.826944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.826977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.827210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.827262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.827412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.827445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.827640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.827673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.827817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.827850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.828146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.828178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.828407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.828441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.828718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.828751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.295 [2024-07-15 08:03:48.828956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.828989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 
00:28:04.295 [2024-07-15 08:03:48.829253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.295 [2024-07-15 08:03:48.829287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.295 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.829550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.829583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.829879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.829911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.830127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.830160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.830408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.830427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.830603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.830635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.830847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.830880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.831006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.831037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.831296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.831330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.831537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.831576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 
00:28:04.296 [2024-07-15 08:03:48.831728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.831761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.832020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.832052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.832268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.832285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.832462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.832478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.832729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.832761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.832967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.832984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.833168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.833199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.833517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.833549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.833749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.833781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.833995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.834027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 
00:28:04.296 [2024-07-15 08:03:48.834217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.834263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.834537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.834577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.834821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.834853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.835104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.835138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.835339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.835357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.835585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.835604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.835829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.835846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.836116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.836133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.836381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.836399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.836598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.836615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 
00:28:04.296 [2024-07-15 08:03:48.836892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.836909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.837067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.837084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.837205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.837222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.837425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.837443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.837715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.837746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.837957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.837989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.296 qpair failed and we were unable to recover it. 00:28:04.296 [2024-07-15 08:03:48.838185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.296 [2024-07-15 08:03:48.838223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.838505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.838538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.838756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.838789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.839047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.839079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 
00:28:04.297 [2024-07-15 08:03:48.839402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.839437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.839690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.839723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.839921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.839953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.840161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.840177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.840431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.840448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.840605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.840621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.840779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.840796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.841066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.841084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.841209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.841233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.841397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.841433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 
00:28:04.297 [2024-07-15 08:03:48.841712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.841744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.841890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.841922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.842066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.842097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.842307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.842349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.842523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.842539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.842713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.842745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.843024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.843056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.843258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.843275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.843404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.843422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.843666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.843684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 
00:28:04.297 [2024-07-15 08:03:48.843866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.843883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.844109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.844126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.844405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.844422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.844652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.844670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.844833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.844850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.845100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.845119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.845298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.845315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.845472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.845488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.845684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.845702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.845882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.845917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 
00:28:04.297 [2024-07-15 08:03:48.846122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.846157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.846425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.846458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.846720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.846754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.846964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.846996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.847209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.847238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.847468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.847486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.847713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.297 [2024-07-15 08:03:48.847745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.297 qpair failed and we were unable to recover it. 00:28:04.297 [2024-07-15 08:03:48.848033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.298 [2024-07-15 08:03:48.848070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.298 qpair failed and we were unable to recover it. 00:28:04.298 [2024-07-15 08:03:48.848333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.298 [2024-07-15 08:03:48.848367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.298 qpair failed and we were unable to recover it. 00:28:04.298 [2024-07-15 08:03:48.848581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.298 [2024-07-15 08:03:48.848613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.298 qpair failed and we were unable to recover it. 
00:28:04.300 [2024-07-15 08:03:48.872681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.872697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.872850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.872866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.873089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.873121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.873322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.873357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.873574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.873606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.873740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.873773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.874047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.874126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.874436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.874477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.874807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.874842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.875075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.875108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.875344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.875377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.875613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.875647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.875793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.300 [2024-07-15 08:03:48.875827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.300 qpair failed and we were unable to recover it.
00:28:04.300 [2024-07-15 08:03:48.876025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.876058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.876366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.876398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.876531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.876557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.876700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.876733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.877023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.877056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.877253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.877288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.301 [2024-07-15 08:03:48.877550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.301 [2024-07-15 08:03:48.877570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.301 qpair failed and we were unable to recover it.
00:28:04.303 [2024-07-15 08:03:48.896907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.896941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.897089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.897123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.897348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.897382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.897576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.897610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.897806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.897839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.898040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.898073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.898305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.898339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.898596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.898614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.898851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.898868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.899055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.899072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 
00:28:04.303 [2024-07-15 08:03:48.899241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.899258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.899521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.899552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.899708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.899741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.899956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.899988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.900247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.900265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.900459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.900477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.900750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.900782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.901012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.901044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.901242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.901260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.901391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.901438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 
00:28:04.303 [2024-07-15 08:03:48.901628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.901660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.901883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.901916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.902108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.902142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.902358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.902392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.902528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.902561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.902704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.902738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.902960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.902993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.903202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.903219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.903458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.903499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.903784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.903816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 
00:28:04.303 [2024-07-15 08:03:48.904144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.904177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.904403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.904421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.904712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.904743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.905042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.905074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.303 qpair failed and we were unable to recover it. 00:28:04.303 [2024-07-15 08:03:48.905334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.303 [2024-07-15 08:03:48.905369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.905538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.905569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.905795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.905828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.905969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.906003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.906192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.906210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.906386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.906403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 
00:28:04.304 [2024-07-15 08:03:48.906637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.906656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.906813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.906831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.907030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.907047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.907197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.907215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.907408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.907425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.907611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.907628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.907845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.907878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.908088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.908121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.908335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.908352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.909463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.909501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 
00:28:04.304 [2024-07-15 08:03:48.909725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.909744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.909975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.910008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.910271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.910305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.910520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.910538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.910625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.910641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.910774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.910792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.911024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.911041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.911213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.911238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.911485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.911501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.911631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.911648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 
00:28:04.304 [2024-07-15 08:03:48.911782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.911818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.912028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.912061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.912286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.912307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.912479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.912498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.912692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.912724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.912884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.912917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.913122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.913155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.913299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.913316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.913497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.913533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.913680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.913719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 
00:28:04.304 [2024-07-15 08:03:48.913881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.913914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.914118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.914151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.914398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.914433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.304 qpair failed and we were unable to recover it. 00:28:04.304 [2024-07-15 08:03:48.914601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.304 [2024-07-15 08:03:48.914633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.914906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.914938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.915261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.915294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.915509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.915541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.915758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.915789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.916104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.916136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.916383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.916417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 
00:28:04.305 [2024-07-15 08:03:48.916585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.916623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.916811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.916828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.917073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.917091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.917216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.917241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.917375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.917392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.917608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.917641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.917849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.917881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.918023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.918057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.918287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.918305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.918535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.918568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 
00:28:04.305 [2024-07-15 08:03:48.918802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.918834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.919104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.919140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.919347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.919364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.919486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.919502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.919666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.919683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.919888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.919906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.920097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.920118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.920245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.920263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.920466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.920498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.920644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.920676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 
00:28:04.305 [2024-07-15 08:03:48.920820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.920852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.921096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.921128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.921351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.921368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.921547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.921564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.921746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.921779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.921994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.922027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.922250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.922285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.922443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.922475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.922738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.922771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 00:28:04.305 [2024-07-15 08:03:48.922927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.305 [2024-07-15 08:03:48.922960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.305 qpair failed and we were unable to recover it. 
00:28:04.305 [2024-07-15 08:03:48.923375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.305 [2024-07-15 08:03:48.923455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.305 qpair failed and we were unable to recover it.
...
00:28:04.307 [2024-07-15 08:03:48.945043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.307 [2024-07-15 08:03:48.945076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.307 qpair failed and we were unable to recover it.
00:28:04.307 [2024-07-15 08:03:48.945362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.945396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.945588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.945621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.945772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.945804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.946074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.946107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.946349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.946383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.946557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.946589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.946783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.946816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.947043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.947076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.947308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.947343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.947641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.947674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 
00:28:04.307 [2024-07-15 08:03:48.947960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.947993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.948303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.948340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.948626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.948659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.948990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.949025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.949303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.949337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.949538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.949571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.949907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.949939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.950136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.950168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.950407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.950441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.950599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.950632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 
00:28:04.307 [2024-07-15 08:03:48.950827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.950860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.951127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.951166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.307 qpair failed and we were unable to recover it. 00:28:04.307 [2024-07-15 08:03:48.951391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.307 [2024-07-15 08:03:48.951426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.951688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.951721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.951960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.951992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.952131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.952163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.952402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.952436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.952587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.952619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.952826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.952859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.953063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.953095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 
00:28:04.308 [2024-07-15 08:03:48.953270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.953306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.953537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.953569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.953781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.953814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.954016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.954050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.954185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.954217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.954389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.954424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.954635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.954667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.954822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.954854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.955009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.955041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.955174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.955207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 
00:28:04.308 [2024-07-15 08:03:48.955364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.955398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.955546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.955578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.955779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.956078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.956110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.956261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.956294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.956531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.956564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.956713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.956745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.956959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.956991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.957123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.957156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.957478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.957513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 
00:28:04.308 [2024-07-15 08:03:48.957729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.957761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.957894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.957926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.958138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.958170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.958343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.958377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.958513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.958546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.958690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.958723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.958861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.958896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.959197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.959245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.959442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.959475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.959635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.959668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 
00:28:04.308 [2024-07-15 08:03:48.959821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.959854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.960063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.960096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.960250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.308 [2024-07-15 08:03:48.960296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.308 qpair failed and we were unable to recover it. 00:28:04.308 [2024-07-15 08:03:48.960507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.960544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.960695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.960728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.960846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.960878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.961008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.961040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.961192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.961240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.961384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.961418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.961627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.961661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 
00:28:04.309 [2024-07-15 08:03:48.961873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.961906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.962115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.962146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.962440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.962475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.962609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.962641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.962806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.962839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.962957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.962990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.963189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.963220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.963450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.963486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.963626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.963659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.963805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.963837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 
00:28:04.309 [2024-07-15 08:03:48.963974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.964006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.965484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.965545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.965829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.965864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.966060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.966093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.966253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.966287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.966428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.966460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.966681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.966714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.966849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.966882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.967023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.967055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.967275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.967316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 
00:28:04.309 [2024-07-15 08:03:48.967442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.967474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.967665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.967697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.967900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.967930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.968141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.968173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.968394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.968427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.968625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.968657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.968861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.968890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.969090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.969120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.309 qpair failed and we were unable to recover it. 00:28:04.309 [2024-07-15 08:03:48.969338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.309 [2024-07-15 08:03:48.969370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.969586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.969616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 
00:28:04.310 [2024-07-15 08:03:48.969806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.969836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.970944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.970977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.971180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.971212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.971373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.971406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.971599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.971631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 
00:28:04.310 [2024-07-15 08:03:48.971858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.971890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.972086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.972119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.972333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.972366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.972511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.972542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.972668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.972701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.972840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.972872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.973132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.973164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.973313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.973346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.973499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.973531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.973669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.973699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 
00:28:04.310 [2024-07-15 08:03:48.973904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.973937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.974063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.974095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.974219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.974265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.974593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.974628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.974888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.974919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.975066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.975098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.975240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.975274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.975533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.975565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.975850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.975882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 00:28:04.310 [2024-07-15 08:03:48.976035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.310 [2024-07-15 08:03:48.976067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:04.310 qpair failed and we were unable to recover it. 
00:28:04.310 [2024-07-15 08:03:48.976213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.310 [2024-07-15 08:03:48.976285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.310 qpair failed and we were unable to recover it.
00:28:04.311 [2024-07-15 08:03:48.978528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.311 [2024-07-15 08:03:48.978557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:04.311 qpair failed and we were unable to recover it.
00:28:04.311 [2024-07-15 08:03:48.978750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.311 [2024-07-15 08:03:48.978827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:04.311 qpair failed and we were unable to recover it.
00:28:04.315 [2024-07-15 08:03:49.013644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.315 [2024-07-15 08:03:49.013675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:04.315 qpair failed and we were unable to recover it.
00:28:04.315 [2024-07-15 08:03:49.013937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.315 [2024-07-15 08:03:49.014007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.315 qpair failed and we were unable to recover it.
00:28:04.597 [2024-07-15 08:03:49.018432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.597 [2024-07-15 08:03:49.018464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.597 qpair failed and we were unable to recover it.
00:28:04.597 [2024-07-15 08:03:49.018600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.018632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.018764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.018797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.018927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.018959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.019080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.019112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.019251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.019284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.019396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.019429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.019616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.019647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.019855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.019888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.020081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.020113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.020242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.020275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 
00:28:04.597 [2024-07-15 08:03:49.020506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.020539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.020670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.020702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.020843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.020875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.020990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.021021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.021161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.021404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.021437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.021664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.021695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.021838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.021875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.022011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.022043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.022179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.022211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 
00:28:04.597 [2024-07-15 08:03:49.022451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.022483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.022626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.022659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.022779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.022810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.023875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.023907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.024092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.024123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 
00:28:04.597 [2024-07-15 08:03:49.024381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.597 [2024-07-15 08:03:49.024414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.597 qpair failed and we were unable to recover it. 00:28:04.597 [2024-07-15 08:03:49.024565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.024597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.024725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.024757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.024874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.024905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.025065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.025096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.025290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.025322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.025545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.025578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.025768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.025800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.025992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.026023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.026215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.026256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 
00:28:04.598 [2024-07-15 08:03:49.026512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.026544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.026743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.026775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.026899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.026931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.027129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.027160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.027316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.027349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.027534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.027565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.027814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.027845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.028043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.028214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.028404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 
00:28:04.598 [2024-07-15 08:03:49.028556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.028716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.028938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.028970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.029089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.029120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.029367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.029400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.029559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.029589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.029727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.029758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.029892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.029929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.030120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.030151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.030289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.030322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 
00:28:04.598 [2024-07-15 08:03:49.030505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.030537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.030722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.030754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.030938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.030968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.031115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.031314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.031478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.031702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.031859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.031975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.032006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 00:28:04.598 [2024-07-15 08:03:49.032135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.032167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.598 qpair failed and we were unable to recover it. 
00:28:04.598 [2024-07-15 08:03:49.032363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.598 [2024-07-15 08:03:49.032395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.032526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.032558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.032699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.032732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.032865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.032897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.033025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.033057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.033185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.033216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.033349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.033380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.033654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.033685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.033882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.033913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.034032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.034063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 
00:28:04.599 [2024-07-15 08:03:49.034194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.034237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.034420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.034451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.034636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.034667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.034867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.034898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.035038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.035071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.035321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.035352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.035466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.035498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.035702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.035733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.035928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.035959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.036167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.036199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 
00:28:04.599 [2024-07-15 08:03:49.036382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.036415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.036569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.036601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.036716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.036747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.036878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.036910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.037161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.037193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.037323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.037354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.037491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.037523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.037708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.037745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.037876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.037907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.038043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.038074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 
00:28:04.599 [2024-07-15 08:03:49.038205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.038244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.038451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.038483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.038689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.038720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.038866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.038898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.039168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.039200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.039418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.039450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.039667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.039699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.039899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.039930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.599 [2024-07-15 08:03:49.040050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.599 [2024-07-15 08:03:49.040082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.599 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.040206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.040268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 
00:28:04.600 [2024-07-15 08:03:49.040394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.040425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.040617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.040649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.040766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.040797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.041036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.041207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.041433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.041662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.041798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.041999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.042031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.042248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.042280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 
00:28:04.600 [2024-07-15 08:03:49.042419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.042450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.042661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.042692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.042894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.042926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.043131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.043162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.043309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.043342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.043503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.043534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.043721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.043753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.043884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.043916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.044194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.044234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.044424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.044455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 
00:28:04.600 [2024-07-15 08:03:49.044638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.044671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.044812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.044843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.045034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.045067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.045314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.045347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.045477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.045508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.045716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.045747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.045893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.045924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.046057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.046297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.046455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 
00:28:04.600 [2024-07-15 08:03:49.046607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.046768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.046920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.046951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.047082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.047114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.047296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.047329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.047476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.047507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.047707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.047739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.047863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.047894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.600 [2024-07-15 08:03:49.048018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.600 [2024-07-15 08:03:49.048049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.600 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.048264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.048297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 
00:28:04.601 [2024-07-15 08:03:49.048480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.048512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.048642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.048674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.048859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.048891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.049904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.049935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.050068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.050099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 
00:28:04.601 [2024-07-15 08:03:49.050283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.050314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.050431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.050462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.050651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.050682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.050862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.050893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.051033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.051065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.051261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.051293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.051425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.051455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.051572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.051603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.051786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.051817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.052006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.052037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 
00:28:04.601 [2024-07-15 08:03:49.052150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.052181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.052321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.052352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.052544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.052576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.052754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.052786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.052997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.053028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.053171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.053201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.053435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.053466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.053663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.053699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.053894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.053926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.054040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.054071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 
00:28:04.601 [2024-07-15 08:03:49.054201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.054243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.601 qpair failed and we were unable to recover it. 00:28:04.601 [2024-07-15 08:03:49.054382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.601 [2024-07-15 08:03:49.054413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.054535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.054566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.054688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.054719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.054849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.054880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.055008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.055039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.055164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.055195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.055371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.055444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.055597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.055632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.055905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.055938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 
00:28:04.602 [2024-07-15 08:03:49.056068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.056100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.056255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.056289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.056423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.056454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.056658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.056689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.056816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.056847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.057032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.057181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.057387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.057547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.057715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 
00:28:04.602 [2024-07-15 08:03:49.057871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.057902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.058862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.058993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.059024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.059163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.059194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.059392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.059424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 
00:28:04.602 [2024-07-15 08:03:49.059550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.059582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.059829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.059861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.060087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.060316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.060473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.060635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.060860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.060989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.061028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.061142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.061173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.061302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.061335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 
00:28:04.602 [2024-07-15 08:03:49.061463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.061495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.602 qpair failed and we were unable to recover it. 00:28:04.602 [2024-07-15 08:03:49.061614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.602 [2024-07-15 08:03:49.061647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.061784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.061816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.062031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.062269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.062422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.062651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.062800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.062991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.063023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.063146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.063178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 
00:28:04.603 [2024-07-15 08:03:49.063388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.063421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.063553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.063585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.063770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.063802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.063983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.064135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.064311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.064522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.064697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.064845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.064877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.065012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 
00:28:04.603 [2024-07-15 08:03:49.065174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.065398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.065614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.065768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.065943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.065975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.066101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.066260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.066402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.066581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.066714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 
00:28:04.603 [2024-07-15 08:03:49.066869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.066900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.067087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.067119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.067305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.067339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.067469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.067501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.067616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.067647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.068981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.069032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.069243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.069278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.069405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.069445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.069645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.069677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.069894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.069926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 
00:28:04.603 [2024-07-15 08:03:49.070117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.070151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.603 [2024-07-15 08:03:49.070326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.603 [2024-07-15 08:03:49.070359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.603 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.070615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.070648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.070753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.070785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.070970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.071138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.071297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.071459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.071629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.071854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.071885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 
00:28:04.604 [2024-07-15 08:03:49.072011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.072043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.072260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.072294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.072479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.072511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.072699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.072731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.072856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.072888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.073024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.073175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.073404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.073547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.073710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 
00:28:04.604 [2024-07-15 08:03:49.073871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.073903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.074018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.074050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.074337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.074370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.074511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.074543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.074736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.074768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.074916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.074948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.075203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.075245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.075435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.075467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.075584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.075617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.075741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.075772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 
00:28:04.604 [2024-07-15 08:03:49.075985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.076017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.076277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.076310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.076450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.076481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.076680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.076712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.076924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.076956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.077148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.077180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.077331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.077362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.077571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.077604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.077734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.077766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 00:28:04.604 [2024-07-15 08:03:49.077960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.604 [2024-07-15 08:03:49.077991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.604 qpair failed and we were unable to recover it. 
00:28:04.604 [2024-07-15 08:03:49.078105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.604 [2024-07-15 08:03:49.078136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.604 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 08:03:49.078 through 08:03:49.117 with only the timestamps changing ...]
00:28:04.610 [2024-07-15 08:03:49.117336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.610 [2024-07-15 08:03:49.117368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.610 qpair failed and we were unable to recover it.
00:28:04.610 [2024-07-15 08:03:49.117493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.117525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.117707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.117738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.117865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.117897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.118164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.118195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.118345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.118378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.118505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.118537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.118641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.118672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.118878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.118910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.119089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.119119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.119316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.119348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 
00:28:04.610 [2024-07-15 08:03:49.119573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.119605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.119797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.119828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.120009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.120039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.120221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.120259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.120470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.120506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.120688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.120720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.120853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.120884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.121074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.121105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.121298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.610 [2024-07-15 08:03:49.121329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.610 qpair failed and we were unable to recover it. 00:28:04.610 [2024-07-15 08:03:49.121548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.121579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 
00:28:04.611 [2024-07-15 08:03:49.121761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.121792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.121982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.122144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.122329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.122509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.122724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.122886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.122917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.123055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.123223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.123371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 
00:28:04.611 [2024-07-15 08:03:49.123585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.123751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.123900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.123931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.124071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.124102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.124283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.124316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.124436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.124468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.124594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.124625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.124874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.124906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.125026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.125190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 
00:28:04.611 [2024-07-15 08:03:49.125361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.125508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.125659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.125818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.125849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.126051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.126082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.126210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.126248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.126369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.126400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.126646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.126677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.126877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.126909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.127035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.127067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 
00:28:04.611 [2024-07-15 08:03:49.127192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.127223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.127414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.127445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.127571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.127602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.127786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.127817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.611 qpair failed and we were unable to recover it. 00:28:04.611 [2024-07-15 08:03:49.128923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.611 [2024-07-15 08:03:49.128954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 
00:28:04.612 [2024-07-15 08:03:49.129129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.129159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.129344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.129376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.129559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.129591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.129729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.129760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.129958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.129989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.130173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.130203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.130405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.130437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.130630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.130661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.130847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.130878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.130993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 
00:28:04.612 [2024-07-15 08:03:49.131166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.131333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.131477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.131701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.131925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.131956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.132081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.132112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.132315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.132347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.132570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.132601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.132804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.132835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.132968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.132999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 
00:28:04.612 [2024-07-15 08:03:49.133109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.133269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.133430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.133591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.133739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.133905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.133936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.134068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.134099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.134289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.134320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.134435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.134465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.134641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.134672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 
00:28:04.612 [2024-07-15 08:03:49.134854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.134885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.135800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.135830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.136121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.136152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.136358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.136389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.612 qpair failed and we were unable to recover it. 00:28:04.612 [2024-07-15 08:03:49.136515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.612 [2024-07-15 08:03:49.136546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 
00:28:04.613 [2024-07-15 08:03:49.136724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.136754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.137028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.137062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.137267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.137301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.137497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.137528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.137660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.137692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.137879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.137911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.138040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.138070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.138316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.138347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.138547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.138578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.138695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.138726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 
00:28:04.613 [2024-07-15 08:03:49.138876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.138908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.139096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.139127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.139339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.139371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.139567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.139599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.139731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.139763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.139952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.139984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.140124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.140155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.140284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.140316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.140436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.140467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.140645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.140677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 
00:28:04.613 [2024-07-15 08:03:49.140921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.140952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.141963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.141994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.142130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.142161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.142396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.142428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 00:28:04.613 [2024-07-15 08:03:49.142553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.613 [2024-07-15 08:03:49.142583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.613 qpair failed and we were unable to recover it. 
00:28:04.613 [2024-07-15 08:03:49.142701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.613 [2024-07-15 08:03:49.142732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.613 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:03:49.142 through 08:03:49.184 ...]
00:28:04.619 [2024-07-15 08:03:49.184492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.619 [2024-07-15 08:03:49.184524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.619 qpair failed and we were unable to recover it.
00:28:04.619 [2024-07-15 08:03:49.184654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.184685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.184869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.184900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.185808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.185839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.186031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.186182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 
00:28:04.619 [2024-07-15 08:03:49.186355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.186579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.186805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.186956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.186988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.187934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.187965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 
00:28:04.619 [2024-07-15 08:03:49.188158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.188189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.188394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.188432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.188563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.188594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.188718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.188749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.188968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.189000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.189113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.189144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.189326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.189358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.189575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.619 [2024-07-15 08:03:49.189606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.619 qpair failed and we were unable to recover it. 00:28:04.619 [2024-07-15 08:03:49.189830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.189862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.190059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 
00:28:04.620 [2024-07-15 08:03:49.190207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.190432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.190586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.190743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.190901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.190933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.191082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.191114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.191279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.191311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.191506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.191537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.191671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.191702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.191884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.191915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 
00:28:04.620 [2024-07-15 08:03:49.192031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.192879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.192997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.193210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.193412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.193556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 
00:28:04.620 [2024-07-15 08:03:49.193717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.193882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.193912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.194892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.194923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.195140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.195172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.195318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.195350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 
00:28:04.620 [2024-07-15 08:03:49.195479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.195510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.195641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.195677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.195802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.195833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.620 qpair failed and we were unable to recover it. 00:28:04.620 [2024-07-15 08:03:49.196934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.620 [2024-07-15 08:03:49.196966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.197101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.197132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 
00:28:04.621 [2024-07-15 08:03:49.197326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.197363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.197564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.197595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.197710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.197741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.197858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.197889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.198010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.198042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.198237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.198270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.198400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.198431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.198648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.198679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.198925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.198955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.199070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 
00:28:04.621 [2024-07-15 08:03:49.199232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.199390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.199600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.199755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.199915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.199947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.200075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.200106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.200270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.200302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.200451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.200482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.200620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.200653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.200780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.200811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 
00:28:04.621 [2024-07-15 08:03:49.200998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.201029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.201242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.201274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.201473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.201505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.201687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.201718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.201845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.201876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.202001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.202031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.202216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.202260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.202392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.202423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.202541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.202571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.202773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.202804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 
00:28:04.621 [2024-07-15 08:03:49.202995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.203138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.203446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.203596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.203745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.203908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.203939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.204115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.204146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.204256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.621 [2024-07-15 08:03:49.204288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.621 qpair failed and we were unable to recover it. 00:28:04.621 [2024-07-15 08:03:49.204423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.204455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.204572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.204603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 
00:28:04.622 [2024-07-15 08:03:49.204727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.204757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.204916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.204947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.205079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.205110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.205271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.205304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.205429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.205462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.205655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.205686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.205871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.205902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.206081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.206112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.206315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.206347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.206597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.206629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 
00:28:04.622 [2024-07-15 08:03:49.206753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.206783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.206978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.207146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.207370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.207585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.207744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.207919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.207950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.208085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.208312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.208486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 
00:28:04.622 [2024-07-15 08:03:49.208631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.208792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.208940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.208972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.209148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.209179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.209305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.209336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.209453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.209485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.209690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.209721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.209910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.209941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.210120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.210152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.210285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.210318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 
00:28:04.622 [2024-07-15 08:03:49.210443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.210475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.210586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.210622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.210748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.210779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.211022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.211054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.211244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.211276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.211473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.211504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.211685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.211716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.622 [2024-07-15 08:03:49.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.622 [2024-07-15 08:03:49.211868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.622 qpair failed and we were unable to recover it. 00:28:04.623 [2024-07-15 08:03:49.211996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.623 [2024-07-15 08:03:49.212027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.623 qpair failed and we were unable to recover it. 00:28:04.623 [2024-07-15 08:03:49.212156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.623 [2024-07-15 08:03:49.212187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.623 qpair failed and we were unable to recover it. 
00:28:04.623 [2024-07-15 08:03:49.212382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.212413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.212538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.212569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.212680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.212711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.212892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.212923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.213892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.213923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.214958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.214989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.215151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.215182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.215342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.215373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.215577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.215608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.215736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.215768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.215915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.215946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.216097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.216250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.216433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.216665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.216875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.216985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.217136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.217292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.217520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.217666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.217902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.623 [2024-07-15 08:03:49.217938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.623 qpair failed and we were unable to recover it.
00:28:04.623 [2024-07-15 08:03:49.218183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.218215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.218421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.218454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.218578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.218610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.218722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.218753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.218887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.218919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.219098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.219130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.219309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.219342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.219483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.219514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.219723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.219754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.219897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.219928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.220058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.220089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.220202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.220240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.220434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.220466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.220654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.220685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.220804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.220836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.221941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.221972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.222106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.222137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.222317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.222351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.222468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.222500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.222683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.222715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.222859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.222890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.223846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.223877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.224109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.224260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.224484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.224658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.224834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.224968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.225000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.225193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.225236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.225365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.624 [2024-07-15 08:03:49.225401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.624 qpair failed and we were unable to recover it.
00:28:04.624 [2024-07-15 08:03:49.225513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.225544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.225663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.225694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.225816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.225847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.225968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.225999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.226113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.226145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.226259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.226291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.226541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.226572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.226692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.226723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.226851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.226882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.227127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.227159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.227341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.227373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.227495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.227526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.227720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.227751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.227951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.227982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.228164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.228195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.228319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.228351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.228471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.228502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.228766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.228797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.228916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.228946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.229812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.229843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.230963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.230994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.231109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.231140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.231324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.231356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.231572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.231604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.231719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.231751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.231863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.231895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.232081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.232112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.232243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.232275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.232453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.232484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.232677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.625 [2024-07-15 08:03:49.232713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.625 qpair failed and we were unable to recover it.
00:28:04.625 [2024-07-15 08:03:49.232853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.233007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.233038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.233261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.233294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.233404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.233435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.233635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.233667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.233806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.233837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.234032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.234063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.234250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.234283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.234464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.234497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.234695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.234726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.234933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.234965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.235102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.235133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.235251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.235283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.235503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.235534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.235661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.235692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.235811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.235842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.236086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.236118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.236253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.236285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.236474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.236506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.236689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.236720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.236843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.236874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.237960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.237993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.238108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.238139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.238268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.238301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.238444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.238475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.238664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.238696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.238883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.238914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.239965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.239996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.240125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.240156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.626 [2024-07-15 08:03:49.240279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.626 [2024-07-15 08:03:49.240320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.626 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.240457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.240488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.240741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.240772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.240958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.240990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.241119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.241151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.241290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.241322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.241452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.241483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.241677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.241708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.241897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.241929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.242949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.242981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.243161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.243192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.243416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.243448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.243575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.243605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.243808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.627 [2024-07-15 08:03:49.243840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:04.627 qpair failed and we were unable to recover it.
00:28:04.627 [2024-07-15 08:03:49.243960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.243992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.244170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.244201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.244351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.244382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.244575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.244607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.244721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.244752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.244935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.244965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.245080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.245236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.245469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.245624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 
00:28:04.627 [2024-07-15 08:03:49.245781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.245937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.245968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.246095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.246126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.246269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.246302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.246444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.246476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.246599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.246630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.627 [2024-07-15 08:03:49.246819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.627 [2024-07-15 08:03:49.246850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.627 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.246972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.247003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.247185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.247217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.247338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.247370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 
00:28:04.628 [2024-07-15 08:03:49.247557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.247589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.247778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.247815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.247994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.248172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.248446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.248612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.248783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.248929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.248960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.249143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.249174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.249300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.249333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 
00:28:04.628 [2024-07-15 08:03:49.249454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.249485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.249693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.249725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.249856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.249887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.250087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.250118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.250308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.250341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.250598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.250629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.250818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.250849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.250980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.251011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.251198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.251238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.251420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.251452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 
00:28:04.628 [2024-07-15 08:03:49.251635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.251666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.251870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.251900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.252083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.252114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.252335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.252366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.252560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.252591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.252833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.252863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.252989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.253020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.253199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.253236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.253357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.253389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.253582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.253613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 
00:28:04.628 [2024-07-15 08:03:49.253819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.253850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.254002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.254033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.254213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.254272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.254465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.254496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.254635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.254667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.254849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.254880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.628 [2024-07-15 08:03:49.255014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.628 [2024-07-15 08:03:49.255044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.628 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.255293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.255325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.255505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.255537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.255675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.255706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 
00:28:04.629 [2024-07-15 08:03:49.255990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.256157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.256345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.256561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.256719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.256862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.256892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.257116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.257147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.257280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.257312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.257496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.257527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.257714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.257745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 
00:28:04.629 [2024-07-15 08:03:49.257998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.258029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.258239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.258271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.258474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.258504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.258699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.258730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.258859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.258891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.259143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.259174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.259322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.259354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.259470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.259501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.259679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.259711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.259962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.259993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 
00:28:04.629 [2024-07-15 08:03:49.260185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.260216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.260437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.260470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.260666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.260698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.260824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.260855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.260985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.261017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.261193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.261238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.261367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.261398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.261659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.261690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.261884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.261917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.262107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.262138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 
00:28:04.629 [2024-07-15 08:03:49.262334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.262365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.262644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.262675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.262800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.262831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.262956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.262987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.263125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.263157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.263346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.263378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.629 [2024-07-15 08:03:49.263494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.629 [2024-07-15 08:03:49.263525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.629 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.263657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.263687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.263824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.263856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.264060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.264092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 
00:28:04.630 [2024-07-15 08:03:49.264213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.264252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.264357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.264394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.264583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.264614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.264803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.264835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.265872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.265904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 
00:28:04.630 [2024-07-15 08:03:49.266083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.266114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.266250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.266282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.266464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.266495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.266676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.266707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.266892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.266923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.267109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.267140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.267329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.267361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.267544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.267575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.267760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.267791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.267908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.267939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 
00:28:04.630 [2024-07-15 08:03:49.268070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.268101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.268260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.268292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.268409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.268439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.268626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.268656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.268905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.268936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.269064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.269094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.269207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.269247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.269365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.269396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.269635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.269706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.270000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 
00:28:04.630 [2024-07-15 08:03:49.270219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.270387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.270550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.270706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.270918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.270949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.271157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.271188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.630 [2024-07-15 08:03:49.271446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.630 [2024-07-15 08:03:49.271479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.630 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.271659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.271691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.271809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.271839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.272053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.272084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 
00:28:04.631 [2024-07-15 08:03:49.272206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.272248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.272433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.272465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.272663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.272695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.272820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.272851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.272982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.273013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.273156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.273187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.273448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.273481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.273726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.273757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.273871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.273903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 00:28:04.631 [2024-07-15 08:03:49.274078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.631 [2024-07-15 08:03:49.274109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.631 qpair failed and we were unable to recover it. 
00:28:04.631 [2024-07-15 08:03:49.274247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.631 [2024-07-15 08:03:49.274279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:04.631 qpair failed and we were unable to recover it.
00:28:04.631 [the three-line error pattern above repeats verbatim for every reconnect attempt from 08:03:49.274247 through 08:03:49.320138 (elapsed log time 00:28:04.631 to 00:28:04.636); only the timestamps differ, while tqpair=0x7f2e38000b90, addr=10.0.0.2, and port=4420 stay constant throughout]
00:28:04.636 [2024-07-15 08:03:49.320322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.636 [2024-07-15 08:03:49.320353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.636 qpair failed and we were unable to recover it. 00:28:04.636 [2024-07-15 08:03:49.320584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.636 [2024-07-15 08:03:49.320615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.636 qpair failed and we were unable to recover it. 00:28:04.636 [2024-07-15 08:03:49.320796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.636 [2024-07-15 08:03:49.320826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.636 qpair failed and we were unable to recover it. 00:28:04.636 [2024-07-15 08:03:49.321008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.636 [2024-07-15 08:03:49.321039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.636 qpair failed and we were unable to recover it. 00:28:04.636 [2024-07-15 08:03:49.321210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.636 [2024-07-15 08:03:49.321265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.636 qpair failed and we were unable to recover it. 00:28:04.636 [2024-07-15 08:03:49.321467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.321499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.321630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.321661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.321864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.321895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.322077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.322113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.322305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.322337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 
00:28:04.637 [2024-07-15 08:03:49.322584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.322616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.322834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.322865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.322985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.323174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.323402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.323648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.323796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.323953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.323983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.324202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.324242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.324491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.324523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 
00:28:04.637 [2024-07-15 08:03:49.324716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.324749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.324955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.324986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.325180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.325211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.325370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.325401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.325586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.325617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.325752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.325783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.325917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.325948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.326082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.326113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.326270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.326302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.326417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.326448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 
00:28:04.637 [2024-07-15 08:03:49.326568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.326599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.637 [2024-07-15 08:03:49.326788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.637 [2024-07-15 08:03:49.326819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.637 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.327003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.327034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.327171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.327203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.327452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.327484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.327713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.327745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.328019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.328181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.328343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.328564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 
00:28:04.916 [2024-07-15 08:03:49.328713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.328882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.328913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.329063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.329205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.329503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.329653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.329815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.329975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.330141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.330319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 
00:28:04.916 [2024-07-15 08:03:49.330565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.330743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.330952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.330983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.331166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.331197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.331395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.331428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.331551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.331582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.331797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.331828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.332040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.332072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.332363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.332396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.332644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.332675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 
00:28:04.916 [2024-07-15 08:03:49.332870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.332901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.333079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.333110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.333318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.333351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.333545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.333576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.916 [2024-07-15 08:03:49.333707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.916 [2024-07-15 08:03:49.333739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.916 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.333944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.333975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.334105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.334135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.334325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.334358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.334547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.334579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.334798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.334828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.334944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.334976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.335220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.335259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.335507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.335538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.335733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.335764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.336043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.336074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.336221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.336261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.336466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.336497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.336772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.336803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.337018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.337049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.337262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.337295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.337548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.337579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.337823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.337854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.337993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.338024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.338276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.338309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.338513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.338545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.338818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.338849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.339048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.339079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.339259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.339291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.339411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.339448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.339646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.339677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.339812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.339843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.340033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.340064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.340254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.340286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.340532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.340563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.340744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.340775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.341021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.341053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.341319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.341352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.341555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.341586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.341745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.341776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.341910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.341941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.342119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.342150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.342293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.342325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.342577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.342608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.342791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.343057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.343185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.343216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.343417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.343449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.343693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.343725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.343859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.343890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.344003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.344034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.344245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.344277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.344460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.344491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.344777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.344808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.345058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.345089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.345277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.345310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.345498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.345569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.345776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.345811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.346000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.346033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.346182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.346213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.346514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.346545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.346737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.346768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.346966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.346996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.347121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.347153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.347358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.347392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.347519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.347551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.347796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.347827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.348033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.348064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.348243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.348275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.348405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.348445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 00:28:04.917 [2024-07-15 08:03:49.348642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.917 [2024-07-15 08:03:49.348674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.917 qpair failed and we were unable to recover it. 
00:28:04.917 [2024-07-15 08:03:49.348865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.917 [2024-07-15 08:03:49.348896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.917 qpair failed and we were unable to recover it.
00:28:04.917 [2024-07-15 08:03:49.349160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.917 [2024-07-15 08:03:49.349192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.917 qpair failed and we were unable to recover it.
00:28:04.921 [log condensed: the same three-line error sequence repeats roughly 200 more times between 08:03:49.349 and 08:03:49.396, always for tqpair=0x7f2e28000b90 against addr=10.0.0.2, port=4420 with errno = 111, and every attempt ends with "qpair failed and we were unable to recover it."]
00:28:04.921 [2024-07-15 08:03:49.396374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.396405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.396544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.396575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.396767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.396798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.396986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.397017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.397209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.397249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.397398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.397429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.397636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.397667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.397786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.397818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.398012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.398042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.398209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.398250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 
00:28:04.921 [2024-07-15 08:03:49.398451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.398483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.398688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.398719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.398965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.398996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.399192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.399223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.399370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.399402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.399547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.399584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.399873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.399904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.400083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.400114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.400364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.400397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.400598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.400629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 
00:28:04.921 [2024-07-15 08:03:49.400806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.400836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.401032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.401065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.401271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.401304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.401510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.401541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.401684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.401716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.401855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.401886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.402100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.402131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.402333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.402365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.402583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.402614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.402752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.402783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 
00:28:04.921 [2024-07-15 08:03:49.402916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.402948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.403075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.403106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.403324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.403357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.403499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.403530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.403667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.403698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.403899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.921 [2024-07-15 08:03:49.403930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.921 qpair failed and we were unable to recover it. 00:28:04.921 [2024-07-15 08:03:49.404066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.404097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.404299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.404331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.404460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.404490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.404767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.404798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.404983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.405014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.405203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.405242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.405393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.405425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.405575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.405606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.405812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.405845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.406036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.406068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.406322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.406354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.406473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.406505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.406688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.406718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.406908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.406939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.407129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.407161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.407287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.407319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.407498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.407529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.407727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.407758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.407942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.407973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.408153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.408189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.408457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.408489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.408737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.408769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.408913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.408944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.409139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.409169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.409429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.409462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.409713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.409745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.409934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.409966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.410178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.410209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.410337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.410368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.410555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.410586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.410794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.410825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.410947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.410978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.411160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.411191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.411411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.411443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.411663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.411695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.411827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.411857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.411986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.412017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.412216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.412258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.412441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.412472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.412694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.412726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.412850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.412882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.413102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.413134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.413334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.413367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.413550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.413582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.413726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.413758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.413885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.413917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.414055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.414087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.414266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.414298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.414483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.414514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.414644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.414675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.414856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.414887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.415004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.415036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.415281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.415314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.415503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.415534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 
00:28:04.922 [2024-07-15 08:03:49.415721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.415752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.415951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.415982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.416164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.416195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.416399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.416432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.416687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.416719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.416900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.416937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.417206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.417269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.922 qpair failed and we were unable to recover it. 00:28:04.922 [2024-07-15 08:03:49.417459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.922 [2024-07-15 08:03:49.417490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.417620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.417652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.417847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 
00:28:04.923 [2024-07-15 08:03:49.418062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.418093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.418245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.418277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.418526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.418557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.418827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.418858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.419105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.419137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.419253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.419285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.419532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.419564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.419830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.419862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.420072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.420103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.420254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.420286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 
00:28:04.923 [2024-07-15 08:03:49.420508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.420541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.420809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.420840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.421028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.421059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.421196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.421234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.421419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.421450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.421647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.421678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.421814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.421846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.422091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.422122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.422304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.422335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.422582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.422614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 
00:28:04.923 [2024-07-15 08:03:49.422800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.422832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.422956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.422987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.423193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.423234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.423352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.423383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.423561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.423593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.423812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.423842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.423988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.424020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.424150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.424181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.424328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.424360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 00:28:04.923 [2024-07-15 08:03:49.424589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.923 [2024-07-15 08:03:49.424620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.923 qpair failed and we were unable to recover it. 
00:28:04.923 [2024-07-15 08:03:49.424820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.923 [2024-07-15 08:03:49.424851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.923 qpair failed and we were unable to recover it.
00:28:04.926 [the same three-line error triplet repeats for every reconnect attempt from 08:03:49.424 through 08:03:49.470: each connect() to 10.0.0.2:4420 fails with errno = 111, the corresponding qpair fails, and none recover; the failing qpair is tqpair=0x7f2e28000b90 throughout, except for the attempts between 08:03:49.445 and 08:03:49.448, which report tqpair=0x7f2e30000b90]
00:28:04.926 [2024-07-15 08:03:49.470409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.926 [2024-07-15 08:03:49.470441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.926 qpair failed and we were unable to recover it. 00:28:04.926 [2024-07-15 08:03:49.470642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.926 [2024-07-15 08:03:49.470673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.926 qpair failed and we were unable to recover it. 00:28:04.926 [2024-07-15 08:03:49.470946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.926 [2024-07-15 08:03:49.470977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.926 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.471236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.471268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.471415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.471447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.471643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.471674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.471882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.471914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.472040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.472071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.472198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.472238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.472444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.472475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 
00:28:04.927 [2024-07-15 08:03:49.472611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.472643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.472837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.472869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.473053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.473085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.473269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.473301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.473427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.473458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.473600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.473631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.473819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.473850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.474049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.474079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.474349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.474382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.474562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.474593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 
00:28:04.927 [2024-07-15 08:03:49.474840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.474870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.475067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.475098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.475247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.475279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.475455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.475486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.475669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.475700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.475913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.475944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.476126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.476157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.476293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.476325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.476575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.476606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.476850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.476881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 
00:28:04.927 [2024-07-15 08:03:49.477091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.477122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.477244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.477276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.477478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.477509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.477779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.477811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.478037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.478068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.478191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.478237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.478446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.478478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.478725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.478757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.478954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.478985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.479243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.479274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 
00:28:04.927 [2024-07-15 08:03:49.479525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.479556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.479729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.479760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.479959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.479992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.480268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.480301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.480429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.480461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.480643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.480674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.480955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.480986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.481107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.481138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.481397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.481429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.481606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.481637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 
00:28:04.927 [2024-07-15 08:03:49.481884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.481915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.482105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.482135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.482252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.482284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.482404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.482434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.482562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.482594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.482864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.482895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.483074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.483105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.483309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.483341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.927 [2024-07-15 08:03:49.483524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.927 [2024-07-15 08:03:49.483556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.927 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.483682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.483713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.483912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.483943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.484165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.484196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.484433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.484465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.484643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.484674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.484795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.484826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.485072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.485104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.485306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.485339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.485518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.485549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.485799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.485829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.485960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.485992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.486110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.486141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.486427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.486459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.486642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.486674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.486842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.486874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.487070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.487101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.487267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.487304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.487491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.487521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.487734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.487764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.487893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.487923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.488169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.488200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.488332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.488364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.488606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.488636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.488882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.488914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.489939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.489969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.490175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.490207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.490492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.490524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.490719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.490751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.490946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.490977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.491095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.491127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.491331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.491363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.491504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.491534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.491665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.491698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.491949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.491980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.492171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.492202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.492375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.492406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.492596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.492627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.492822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.492853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.493945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.493976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.494176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.494207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.494329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.494361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.494504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.494534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.494723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.494754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.494950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.494981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.495105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.495136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.495342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.495375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.495510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.495546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.495661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.495692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.495877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.495907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.496108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.496139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.496339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.496390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.496539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.496570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.496706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.496736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.496850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.496882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.497027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.497058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.497253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.497285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.497477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.497507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.497689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.497720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.497853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.497884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.498002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.498033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 00:28:04.928 [2024-07-15 08:03:49.498232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.928 [2024-07-15 08:03:49.498265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.928 qpair failed and we were unable to recover it. 
00:28:04.928 [2024-07-15 08:03:49.498392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.928 [2024-07-15 08:03:49.498423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.928 qpair failed and we were unable to recover it.
00:28:04.931 [the same three-line error triplet repeats ~210 times, timestamps 08:03:49.498392 through 08:03:49.543621, every attempt against tqpair=0x7f2e28000b90, addr=10.0.0.2, port=4420]
00:28:04.931 [2024-07-15 08:03:49.543746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.931 [2024-07-15 08:03:49.543777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.931 qpair failed and we were unable to recover it. 00:28:04.931 [2024-07-15 08:03:49.543953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.931 [2024-07-15 08:03:49.543984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.931 qpair failed and we were unable to recover it. 00:28:04.931 [2024-07-15 08:03:49.544177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.931 [2024-07-15 08:03:49.544212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.931 qpair failed and we were unable to recover it. 00:28:04.931 [2024-07-15 08:03:49.544426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.544458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.544637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.544668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.544835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.544866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.545040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.545071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.545331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.545363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.545555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.545587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.545832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.545863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 
00:28:04.932 [2024-07-15 08:03:49.546130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.546161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.546289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.546321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.546501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.546533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.546730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.546761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.546935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.546966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.547209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.547261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.547393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.547424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.547631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.547662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.547854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.547885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.548081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.548111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 
00:28:04.932 [2024-07-15 08:03:49.548291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.548324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.548515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.548546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.548749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.548780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.548976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.549130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.549354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.549653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.549808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.549968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.549998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.550249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.550281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 
00:28:04.932 [2024-07-15 08:03:49.550408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.550439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.550637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.550667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.550912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.550944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.551071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.551101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.551373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.551405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.551585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.551616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.551806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.551836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.552083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.552115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.552390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.552423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.552625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.552656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 
00:28:04.932 [2024-07-15 08:03:49.552847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.552878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.553062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.553093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.553362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.553399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.553574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.553606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.553793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.553824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.554065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.554096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.554236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.554267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.554511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.554543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.554722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.554753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.554958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.554988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 
00:28:04.932 [2024-07-15 08:03:49.555110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.555141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.555267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.555299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.555565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.555597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.555735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.555766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.555891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.555922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.556051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.556082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.556353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.556386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.556504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.556535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.556728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.932 [2024-07-15 08:03:49.556759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.932 qpair failed and we were unable to recover it. 00:28:04.932 [2024-07-15 08:03:49.556874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.556906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.557111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.557143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.557387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.557418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.557602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.557634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.557829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.557860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.558128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.558159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.558349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.558380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.558568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.558599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.558781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.558812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.558941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.558972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.559220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.559259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.559394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.559426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.559574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.559605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.559794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.559825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.559962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.559993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.560184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.560215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.560364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.560395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.560575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.560606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.560747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.560779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.560915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.560946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.561209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.561251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.561370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.561402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.561595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.561625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.561825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.561861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.562052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.562083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.562199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.562239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.562417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.562449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.562630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.562661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.562850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.562881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.563007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.563037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.563296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.563328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.563457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.563488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.563733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.563765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.563896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.563927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.564184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.564215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.564420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.564452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.564582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.564613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.564808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.564839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.565042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.565282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.565434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.565581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.565742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.565911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.565942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.566909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.566940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.567131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.567163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.567271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.567304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.567512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.567542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.567736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.567767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.567946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.567977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.568244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.568276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.568497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.568528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.568671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.568702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.568894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.568925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.569194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.569254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.569503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.569534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.569723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.569754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.569869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.569901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.570027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.570067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.570249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.570281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.570555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.570586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.570709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.570739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.570871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.570902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.571165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.571196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.571449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.571481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 00:28:04.933 [2024-07-15 08:03:49.571748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.933 [2024-07-15 08:03:49.571779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.933 qpair failed and we were unable to recover it. 
00:28:04.933 [2024-07-15 08:03:49.571969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.934 [2024-07-15 08:03:49.571999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.934 qpair failed and we were unable to recover it.
00:28:04.934 [... the three messages above repeat, with advancing timestamps only, for roughly 210 consecutive connection attempts (2024-07-15 08:03:49.571969 through 08:03:49.618113); every attempt fails identically with errno = 111 against tqpair=0x7f2e28000b90, addr=10.0.0.2, port=4420 ...]
00:28:04.937 [2024-07-15 08:03:49.618082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.937 [2024-07-15 08:03:49.618113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.937 qpair failed and we were unable to recover it.
00:28:04.937 [2024-07-15 08:03:49.618275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.618308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.618572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.618604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.618834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.618865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.618997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.619208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.619385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.619594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.619740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.619882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.619914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.620169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.620205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 
00:28:04.937 [2024-07-15 08:03:49.620353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.620385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.620586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.620618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.620795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.620826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.621055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.621281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.621484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.621658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.621880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.621997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.622027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.622282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.622314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 
00:28:04.937 [2024-07-15 08:03:49.622453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.622484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.622671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.622702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.622836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.622868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.623122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.623153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.623334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.623365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.623582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.623613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.623793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.623823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.623942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.623973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.624102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.624133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.624317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.624349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 
00:28:04.937 [2024-07-15 08:03:49.624456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.624487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.624621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.624652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.624854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.624885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.625153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.625185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.625400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.625433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.625613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.625644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.625933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.625965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.626162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.626193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.626409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.626441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.626711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.626741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 
00:28:04.937 [2024-07-15 08:03:49.626936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.626967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.627185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.627216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.627429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.627461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.627662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.627694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.627911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.627942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.628197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.628236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.628414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.628446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.628652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.628683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.628800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.628831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.628969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 
00:28:04.937 [2024-07-15 08:03:49.629213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.629389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.629600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.629747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.629952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.937 [2024-07-15 08:03:49.629983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.937 qpair failed and we were unable to recover it. 00:28:04.937 [2024-07-15 08:03:49.630236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.630267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.630448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.630479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.630628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.630659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.630850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.630881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.631069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.631100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.631352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.631384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.631573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.631604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.631805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.631836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.632128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.632159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.632356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.632389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.632602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.632634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.632824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.632855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.633033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.633064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.633200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.633239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.633419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.633450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.633641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.633672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.633794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.633826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.634006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.634036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.634215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.634272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.634451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.634482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.634694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.634725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.634921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.634952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.635081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.635112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.635240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.635273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.635469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.635500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.635693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.635725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.635841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.635871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.635990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.636022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.636313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.636345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.636523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.636554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.636681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.636712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.636980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.637012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.637216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.637253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.637398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.637430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.637697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.637734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.638002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.638033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.638234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.638267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.638537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.638569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.638768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.638799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.639072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.639102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.639378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.639410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.639541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.639572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.639764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.639795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.640034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.640065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.640190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.640222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.640409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.640440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.640567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.640598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.640823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.640854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.640989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.641020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.641233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.641265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.641455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.641486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.641680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.641712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.641894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.641925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.642137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.642168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.642357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.642389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.642588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.642619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.642812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.642843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.642960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.643209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.643270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.643490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.643521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.643660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.643691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.643902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.643933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.644081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.644111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.644247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.644279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.644459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.644489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-07-15 08:03:49.644619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.644650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.644839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.644870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.645066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.645096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.645220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.645258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.645389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.645420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-07-15 08:03:49.645666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.938 [2024-07-15 08:03:49.645697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-07-15 08:03:49.645876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.939 [2024-07-15 08:03:49.645906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-07-15 08:03:49.646137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.939 [2024-07-15 08:03:49.646168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-07-15 08:03:49.646300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.939 [2024-07-15 08:03:49.646332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-07-15 08:03:49.646582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.939 [2024-07-15 08:03:49.646617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:04.939 qpair failed and we were unable to recover it. 
00:28:04.939 [2024-07-15 08:03:49.646800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.939 [2024-07-15 08:03:49.646831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:04.939 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() errno 111 -> nvme_tcp_qpair_connect_sock error for tqpair=0x7f2e28000b90 at 10.0.0.2:4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:03:49.646 through 08:03:49.689 ...]
00:28:05.221 [2024-07-15 08:03:49.689710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.221 [2024-07-15 08:03:49.689741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.221 qpair failed and we were unable to recover it.
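On Linux, errno 111 is ECONNREFUSED: the initiator's connect() toward 10.0.0.2:4420 is answered with a TCP reset because nothing is listening on that port while the target is down, so every reconnect attempt fails immediately and logs the triplet above. A minimal standalone probe along the same lines (not part of the test suite; the address and port are taken from the log, and the 30-attempt bound is an arbitrary choice) would wait for the port to start accepting again:

```bash
#!/usr/bin/env bash
# Illustrative probe, assuming the 10.0.0.2:4420 listener from the log.
# Until the NVMe-oF TCP target is back up, each attempt fails the same
# way as posix_sock_create() above: connect() -> ECONNREFUSED (errno 111).
for _ in $(seq 1 30); do
    if nc -z -w 1 10.0.0.2 4420; then
        echo "target is accepting connections on 10.0.0.2:4420 again"
        break
    fi
    sleep 1
done
```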
00:28:05.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3414117 Killed "${NVMF_APP[@]}" "$@"
00:28:05.221 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:05.221 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:05.221 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:05.221 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:05.221 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3414955
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3414955
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3414955 ']'
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:05.222 08:03:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:05.225 [2024-07-15 08:03:49.730153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.225 [2024-07-15 08:03:49.730184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.225 qpair failed and we were unable to recover it.
00:28:05.225 [2024-07-15 08:03:49.730457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.225 [2024-07-15 08:03:49.730490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.225 qpair failed and we were unable to recover it. 00:28:05.225 [2024-07-15 08:03:49.730734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.225 [2024-07-15 08:03:49.730765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.225 qpair failed and we were unable to recover it. 00:28:05.225 [2024-07-15 08:03:49.730963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.225 [2024-07-15 08:03:49.730993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.225 qpair failed and we were unable to recover it. 00:28:05.225 [2024-07-15 08:03:49.731119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.225 [2024-07-15 08:03:49.731150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.225 qpair failed and we were unable to recover it. 00:28:05.225 [2024-07-15 08:03:49.731273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.225 [2024-07-15 08:03:49.731304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.731548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.731579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.731785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.731822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.732092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.732123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.732253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.732285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.732472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.732504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 
00:28:05.226 [2024-07-15 08:03:49.732681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.732712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.732931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.732963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.733081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.733112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.733212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.733253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.733383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.733414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.733557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.733589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.733856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.733886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.734083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.734115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.734305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.734337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.734472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.734503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 
00:28:05.226 [2024-07-15 08:03:49.734639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.734671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.734804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.734835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.735030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.735060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.735261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.735293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.735471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.735502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.735678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.735708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.735953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.735985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.736163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.736193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.736519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.736589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 00:28:05.226 [2024-07-15 08:03:49.736743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.226 [2024-07-15 08:03:49.736777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.226 qpair failed and we were unable to recover it. 
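errno 111 on Linux is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 when the initiator called connect(), so each reconnect attempt above fails immediately. A minimal standalone sketch (illustrative only, not part of the SPDK test suite) that reproduces the same errno by connecting to a port with no listener:

/* Prints "connect() failed, errno = 111 (Connection refused)" on Linux,
 * matching the posix_sock_create messages in the log above,
 * assuming nothing is listening on the chosen address/port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumed closed port */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}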
00:28:05.226 [2024-07-15 08:03:49.736975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.226 [2024-07-15 08:03:49.737007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.226 qpair failed and we were unable to recover it.
[... two more identical failures for tqpair=0xdbaed0 through 2024-07-15 08:03:49.737554 ...]
00:28:05.226 [2024-07-15 08:03:49.737726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.226 [2024-07-15 08:03:49.737796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.226 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f2e38000b90 repeats through 2024-07-15 08:03:49.739196 ...]
00:28:05.226 [2024-07-15 08:03:49.739405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.226 [2024-07-15 08:03:49.739440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.226 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f2e28000b90 repeats through 2024-07-15 08:03:49.746537 ...]
00:28:05.227 [2024-07-15 08:03:49.746820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.227 [2024-07-15 08:03:49.746858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.227 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0xdbaed0 repeats through 2024-07-15 08:03:49.748277 ...]
00:28:05.227 [2024-07-15 08:03:49.748409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.227 [2024-07-15 08:03:49.748440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.227 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0xdbaed0 repeats through 2024-07-15 08:03:49.749538 ...]
00:28:05.228 [2024-07-15 08:03:49.749691] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization...
00:28:05.228 [2024-07-15 08:03:49.749733] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:05.228 [2024-07-15 08:03:49.749753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.228 [2024-07-15 08:03:49.749783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.228 qpair failed and we were unable to recover it.
[... two more identical failures for tqpair=0xdbaed0 through 2024-07-15 08:03:49.750341 ...]
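In the EAL parameters above, -c 0xF0 is DPDK's hexadecimal core mask: 0xF0 = 0b11110000, so this nvmf target instance is pinned to CPU cores 4 through 7. A standalone sketch (illustrative only) that decodes such a mask:

/* Decodes a DPDK-style "-c" core mask; for 0xF0 it prints cores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned mask = 0xF0;  /* from "-c 0xF0" in the EAL parameters above */
    for (int core = 0; core < 32; core++)
        if (mask & (1u << core))
            printf("core %d\n", core);
    return 0;
}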
00:28:05.228 [2024-07-15 08:03:49.750592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.228 [2024-07-15 08:03:49.750623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.228 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0xdbaed0 repeats through 2024-07-15 08:03:49.752770 ...]
00:28:05.228 [2024-07-15 08:03:49.752825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc9000 (9): Bad file descriptor
00:28:05.228 [2024-07-15 08:03:49.753036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.228 [2024-07-15 08:03:49.753071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.228 qpair failed and we were unable to recover it.
00:28:05.228 [2024-07-15 08:03:49.753342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.228 [2024-07-15 08:03:49.753378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.228 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f2e28000b90 repeats through 2024-07-15 08:03:49.754894 ...]
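The "(9): Bad file descriptor" in the flush message above is errno 9 (EBADF): the qpair's socket had already been torn down by the time nvme_tcp_qpair_process_completions tried to flush it. A minimal sketch (illustrative only) showing the same errno from any I/O on an already-closed descriptor:

/* Prints "write failed, errno = 9 (Bad file descriptor)" on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    close(fds[1]);                      /* descriptor torn down... */
    if (write(fds[1], "x", 1) < 0)      /* ...then used again */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fds[0]);
    return 0;
}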
00:28:05.228 [2024-07-15 08:03:49.755091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.228 [2024-07-15 08:03:49.755122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.228 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f2e28000b90 repeats through 2024-07-15 08:03:49.767803 ...]
00:28:05.230 [2024-07-15 08:03:49.767935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.767966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.768143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.768174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.768310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.768342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.768549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.768579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.768786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.768816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.769014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.769045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.769257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.769290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.769584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.769615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.769861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.769892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.770019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.770050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 
00:28:05.230 [2024-07-15 08:03:49.770327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.770359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.770555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.770587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.770775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.770806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.770931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.770962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.771152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.771183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.771383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.771415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.771589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.771621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.771748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.771779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.772029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.772059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.772192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.772223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 
00:28:05.230 [2024-07-15 08:03:49.772432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.772463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.772641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.772673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.772858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.772888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.773132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.773162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.773420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.773453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.773645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.773676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.773802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.773833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.774116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.774148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.774341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.774373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 00:28:05.230 [2024-07-15 08:03:49.774508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.230 [2024-07-15 08:03:49.774540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.230 qpair failed and we were unable to recover it. 
00:28:05.230 [2024-07-15 08:03:49.774785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.774815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.774947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.774977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.775250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.775282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.775460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.775491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.775613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.775645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.775851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.775883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.776023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.776054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.776300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.776337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.776533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.776565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.231 [2024-07-15 08:03:49.776756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.776788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 
00:28:05.231 [2024-07-15 08:03:49.777041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.777072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.777204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.777243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.777512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.777544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.777729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.777760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.777944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.777975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.778113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.778144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.778337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.778370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.778484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.778515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.778738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.778769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.778909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.778941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 
00:28:05.231 [2024-07-15 08:03:49.779143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.779179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.779368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.779399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.779601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.779632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.779784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.779816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.779932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.779963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.780266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.780298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.780570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.780602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.780782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.780813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.780987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.781019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.781212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.781252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 
00:28:05.231 [2024-07-15 08:03:49.781451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.781482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.781609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.781640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.781839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.781871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.781997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.782028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.782244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.782276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.782482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.782513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.782651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.782682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.782894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.782925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.783175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.783207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 00:28:05.231 [2024-07-15 08:03:49.783400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.231 [2024-07-15 08:03:49.783432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.231 qpair failed and we were unable to recover it. 
00:28:05.231 [2024-07-15 08:03:49.783557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.783588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.783721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.783752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.783950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.783981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.784184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.784215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.784482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.784515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.784635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.784667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.784862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.784893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.785131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.785203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.785489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.785533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.785729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.785761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 
00:28:05.232 [2024-07-15 08:03:49.785965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.785996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.786181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.786213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.786370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.786402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.786542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.786573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.786767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.786798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.786981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.787011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.787140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.787171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.787451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.787483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.787691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.787722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.787839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.787870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 
00:28:05.232 [2024-07-15 08:03:49.787997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.788027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.788307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.788339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.788451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.788482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.788674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.788705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.788856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.788887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.789022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.789053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.789191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.789222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.789355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.789386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.789633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.789663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.789809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.789841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 
00:28:05.232 [2024-07-15 08:03:49.790034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.790064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.790270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.790301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.790487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.790519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.790724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.790756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.790946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.790982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.791283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.791315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.791508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.791540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.791715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.791745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.791931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.791963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.232 [2024-07-15 08:03:49.792156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.792186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 
00:28:05.232 [2024-07-15 08:03:49.792720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.232 [2024-07-15 08:03:49.792756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.232 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.792942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.792974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.793085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.793115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.793252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.793283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.793505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.793537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.793797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.793828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.793961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.793993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.794186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.794217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.794426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.794459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.794636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.794667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 
00:28:05.233 [2024-07-15 08:03:49.794789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.794821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.795089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.795120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.795335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.795367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.795659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.795691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.795891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.795922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.796168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.796199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.796388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.796419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.796688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.796719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.796898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.796929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.797140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.797170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 
00:28:05.233 [2024-07-15 08:03:49.797306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.797338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.797538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.797569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.797699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.797730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.797855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.797886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.798160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.798190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.798391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.798422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.798551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.798583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.798703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.798734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.798925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.798958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.799094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.799125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 
00:28:05.233 [2024-07-15 08:03:49.799252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.799285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.799465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.799496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.799621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.799652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.799786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.799817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.800065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.800096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.800214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.800259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.233 qpair failed and we were unable to recover it. 00:28:05.233 [2024-07-15 08:03:49.800519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.233 [2024-07-15 08:03:49.800549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.800794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.800825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.801007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.801037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.801257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.801290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 
00:28:05.234 [2024-07-15 08:03:49.801475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.801507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.801703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.801733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.801932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.801963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.802161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.802193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.802405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.802450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.802672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.802704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.802838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.802870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.803011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.803042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.803222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.803268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.803421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.803453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 
00:28:05.234 [2024-07-15 08:03:49.803636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.803667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.803813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.803844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.804001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.804032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.804235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.804267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.804406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.804437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.804666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.804698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.804882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.804912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.805105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.805136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.805321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.805353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.805477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.805508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 
00:28:05.234 [2024-07-15 08:03:49.805705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.805736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.805991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.806022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.806296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.806331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.806463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.806494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.806704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.806735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.806974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.807005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.807125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.807156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.807435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.807468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.807661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.807693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.807909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.807940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 
00:28:05.234 [2024-07-15 08:03:49.808069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.808100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.808293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.808325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.808503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.808534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.808729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.808760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.808946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.808976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.809091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.809121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.809340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.234 [2024-07-15 08:03:49.809372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.234 qpair failed and we were unable to recover it. 00:28:05.234 [2024-07-15 08:03:49.809566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.809597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.809808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.809839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.809983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.810015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 
00:28:05.235 [2024-07-15 08:03:49.810216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.810255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.810445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.810476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.810657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.810688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.810890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.810920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.811100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.811130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.811380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.811412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.811581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.811612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.811757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.811788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.811922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.811953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.812219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.812265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 
00:28:05.235 [2024-07-15 08:03:49.812532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.812563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.812742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.812772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.813015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.813047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.813247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.813278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.813473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.813504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.813642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.813672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.813918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.813949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.814088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.814119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.814300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.814332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.814625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.814657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 
00:28:05.235 [2024-07-15 08:03:49.814871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.814902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.815038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.815068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.815206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.815245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.815497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.815529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.815709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.815740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.816008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.816040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.816181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.816212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.816435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.816466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.816660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.816691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.816816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.816846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 
00:28:05.235 [2024-07-15 08:03:49.817045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.817076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.817202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.817242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.817438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.817469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.817715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.817746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.817994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.818025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.818160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.818404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.818445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.235 [2024-07-15 08:03:49.818639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.235 [2024-07-15 08:03:49.818669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.235 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.818804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.818834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.818974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.819005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 
00:28:05.236 [2024-07-15 08:03:49.819183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.819214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.819291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.236 [2024-07-15 08:03:49.819448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.819480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.819695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.819728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.819913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.819942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.820055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.820086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.820357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.820390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.820573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.820604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.820720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.820750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.820880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.820910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.821030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.821062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 
00:28:05.236 [2024-07-15 08:03:49.821183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.821213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.821407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.821438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.821682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.821714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.821908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.821939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.822123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.822157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.822405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.822438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.822624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.822656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.822835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.822866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.823063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.823093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.823288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.823320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 
00:28:05.236 [2024-07-15 08:03:49.823452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.823483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.823684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.823715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.823898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.823931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.824103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.824139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.824326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.824359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.824473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.824504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.824637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.824668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.824792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.824823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.825066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.825098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.825292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.825324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 
00:28:05.236 [2024-07-15 08:03:49.825518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.825550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.825688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.825720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.825852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.825884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.826065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.826096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.826293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.826326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.826544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.826576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.826727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.826759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.826900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.826933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.236 qpair failed and we were unable to recover it. 00:28:05.236 [2024-07-15 08:03:49.827046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.236 [2024-07-15 08:03:49.827078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.827271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.827304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 
00:28:05.237 [2024-07-15 08:03:49.827550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.827583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.827713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.827747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.827935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.827967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.828167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.828199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.828355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.828388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.828573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.828607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.828742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.828773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.828906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.828937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.829211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.829273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.829501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.829531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 
00:28:05.237 [2024-07-15 08:03:49.829750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.829781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.830058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.830090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.830234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.830267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.830490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.830522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.830655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.830687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.830880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.830910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.831032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.831191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.831358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.831566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 
00:28:05.237 [2024-07-15 08:03:49.831722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.831949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.831980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.832157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.832187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.832381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.832412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.832538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.832574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.832674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.832705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.832897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.832927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.833059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.833089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.833218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.833280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.833412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.833442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 
00:28:05.237 [2024-07-15 08:03:49.833632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.833662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.833836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.833868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.834054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.834085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.834331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.834363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.834620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.237 [2024-07-15 08:03:49.834651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.237 qpair failed and we were unable to recover it. 00:28:05.237 [2024-07-15 08:03:49.834843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.834874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.835167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.835198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.835378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.835409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.835624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.835655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.835867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.835898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 
00:28:05.238 [2024-07-15 08:03:49.836116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.836147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.836270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.836301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.836483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.836513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.836785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.836816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.837009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.837040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.837219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.837271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.837514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.837545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.837761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.837792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.838022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.838053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.838244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.838276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 
00:28:05.238 [2024-07-15 08:03:49.838404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.838435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.838560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.838596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.838865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.838897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.839013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.839044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.839247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.839280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.839468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.839500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.839720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.839751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.839998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.840138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.840425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 
00:28:05.238 [2024-07-15 08:03:49.840586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.840751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.840956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.840987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.841168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.841199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.841458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.841490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.841678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.841726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.841848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.841880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.842069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.842100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.842290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.842323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.842522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.842554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 
00:28:05.238 [2024-07-15 08:03:49.842743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.842774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.842970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.843001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.843134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.843166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.843420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.238 [2024-07-15 08:03:49.843452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.238 qpair failed and we were unable to recover it. 00:28:05.238 [2024-07-15 08:03:49.843634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.843665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.843781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.843812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.844096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.844128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.844380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.844412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.844594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.844632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.844768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.844798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 
00:28:05.239 [2024-07-15 08:03:49.845070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.845101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.845214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.845255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.845527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.845559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.845729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.845760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.845950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.845981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.846255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.846288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.846498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.846529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.846724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.846755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.846901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.846932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.847213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.847252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 
00:28:05.239 [2024-07-15 08:03:49.847436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.847467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.847657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.847688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.847808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.847840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.848081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.848112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.848333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.848366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.848540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.848571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.848767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.848797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.849049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.849080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.849326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.849359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.849605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.849637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 
00:28:05.239 [2024-07-15 08:03:49.849763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.849795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.849973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.850004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.850202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.850243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.850437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.850468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.850665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.850697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.850966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.851017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.851248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.851293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.851489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.851521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.851717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.851749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.851970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.852001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 
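The entries above are one failure repeated: SPDK's POSIX sock layer reports that connect() to the target at 10.0.0.2 port 4420 returned errno 111, which on Linux is ECONNREFUSED (nothing is accepting on that address and port), and nvme_tcp_qpair_connect_sock then gives the qpair up as unrecoverable. A minimal stand-alone sketch, not SPDK code, that reproduces the same errno with a plain BSD socket (the address and port are taken from the log above; adjust for your environment):

/*
 * Minimal sketch (not SPDK code): a TCP connect() to an address with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux, matching the
 * posix_sock_create errors in this log.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc and run on a host where nothing listens on 10.0.0.2:4420, this prints the same "connect() failed, errno = 111" that posix.c:1038 logs above.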
00:28:05.239 [2024-07-15 08:03:49.852256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.852290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.852503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.852535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.852785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.852816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.853028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.853059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.239 [2024-07-15 08:03:49.853341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.239 [2024-07-15 08:03:49.853375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.239 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.853651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.853682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.853878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.853909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.854099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.854129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.854319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.854359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.854498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.854530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 
00:28:05.240 [2024-07-15 08:03:49.854665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.854696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.854833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.854864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.854993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.855024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.855148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.855179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.855323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.855356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.855554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.855585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.855852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.855882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.856077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.856108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.856395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.856428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.856546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.856579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 
00:28:05.240 [2024-07-15 08:03:49.856764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.856797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.856992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.857025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.857185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.857218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.857428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.857462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.857654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.857687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.857826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.857858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.858044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.858078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.858239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.858272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.858575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.858607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.858812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.858844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 
00:28:05.240 [2024-07-15 08:03:49.859058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.859092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.859237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.859270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.859393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.859425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.859620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.859650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.859842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.859875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.860120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.860171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.860370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.860412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.860629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.860661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.860882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.860914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.861106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.861145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 
00:28:05.240 [2024-07-15 08:03:49.861372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.861408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.861660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.861693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.861961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.861994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.862213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.862256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.240 [2024-07-15 08:03:49.862454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.240 [2024-07-15 08:03:49.862486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.240 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.862599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.862630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.862898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.862932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.863129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.863162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.863374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.863407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.863609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.863642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 
00:28:05.241 [2024-07-15 08:03:49.863822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.863855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.864099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.864130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.864395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.864429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.864566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.864599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.864785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.864816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.864946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.864978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.865171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.865202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.865337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.865369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.865556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.865588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.865794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.865827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 
00:28:05.241 [2024-07-15 08:03:49.866035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.866067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.866313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.866344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.866588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.866626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.866821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.866852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.866983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.867014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.867239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.867271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.867458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.867488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.867678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.867709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.867903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.867934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.868114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.868146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 
00:28:05.241 [2024-07-15 08:03:49.868417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.868448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.868591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.868622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.868744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.868775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.868946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.868976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.869261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.869293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.869432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.869463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.869669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.869700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.869944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.870160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.870191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.870395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.870427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 
00:28:05.241 [2024-07-15 08:03:49.870692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.870724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.870903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.870934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.871133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.871163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.871283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.871315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.871495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.871527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.241 [2024-07-15 08:03:49.871721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.241 [2024-07-15 08:03:49.871752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.241 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.871920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.871951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.872156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.872186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.872475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.872508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.872633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.872668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 
00:28:05.242 [2024-07-15 08:03:49.872793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.872824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.873021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.873051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.873220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.873273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.873495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.873526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.873705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.873737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.873932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.873963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.874154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.874184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.874452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.874485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.874750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.874781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.874982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.875013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 
00:28:05.242 [2024-07-15 08:03:49.875203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.875244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.875446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.875478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.875698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.875730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.875898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.875944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.876134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.876166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.876364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.876397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.876679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.876710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.876899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.876931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.877116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.877148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 00:28:05.242 [2024-07-15 08:03:49.877340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.242 [2024-07-15 08:03:49.877373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.242 qpair failed and we were unable to recover it. 
[... 33 further identical connect()/qpair failures for tqpair=0x7f2e28000b90 (errno = 111) between 08:03:49.877556 and 08:03:49.885000, elided ...]
00:28:05.243 [2024-07-15 08:03:49.885238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.243 [2024-07-15 08:03:49.885279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.243 qpair failed and we were unable to recover it.
[... 39 further identical failures for tqpair=0x7f2e38000b90 between 08:03:49.885467 and 08:03:49.894395, elided ...]
00:28:05.244 [2024-07-15 08:03:49.894616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.244 [2024-07-15 08:03:49.894663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.244 qpair failed and we were unable to recover it.
[... 16 further identical failures for tqpair=0x7f2e30000b90 between 08:03:49.894811 and 08:03:49.898064, elided ...]
00:28:05.244 [2024-07-15 08:03:49.898245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.244 [2024-07-15 08:03:49.898277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.244 qpair failed and we were unable to recover it.
00:28:05.244 [2024-07-15 08:03:49.898339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:05.244 [2024-07-15 08:03:49.898366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:05.245 [2024-07-15 08:03:49.898374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:05.245 [2024-07-15 08:03:49.898380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:05.245 [2024-07-15 08:03:49.898386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:05.245 [2024-07-15 08:03:49.898494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:05.245 [2024-07-15 08:03:49.898601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:05.245 [2024-07-15 08:03:49.898706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:28:05.245 [2024-07-15 08:03:49.898708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... 6 interleaved connect()/qpair failures for tqpair=0x7f2e30000b90 (errno = 111) between 08:03:49.898463 and 08:03:49.899420, elided ...]
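The app_setup_trace notices above are one-time startup messages from the SPDK target, not part of the failure loop: tracing was enabled with group mask 0xFFFF, and the log itself states the two ways to retrieve the trace, either running 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace', if this is the only SPDK application running) against the live process, or copying /dev/shm/nvmf_trace.0 for offline analysis. The four reactor_run notices likewise mark the target's event loops starting on cores 4-7; their interleaving with the connect errors is simply concurrent log output, since the initiator was already retrying while the target was still finishing startup.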
00:28:05.245 [2024-07-15 08:03:49.899550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.245 [2024-07-15 08:03:49.899582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.245 qpair failed and we were unable to recover it.
[... 88 further identical connect()/qpair failures for tqpair=0x7f2e30000b90 (errno = 111) between 08:03:49.899782 and 08:03:49.919272, elided ...]
00:28:05.247 [2024-07-15 08:03:49.919453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.247 [2024-07-15 08:03:49.919485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.247 qpair failed and we were unable to recover it.
00:28:05.247 [2024-07-15 08:03:49.919620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.919652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.919924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.919956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.920138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.920170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.920351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.920412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.920532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.920563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.920758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.920790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.920975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.921007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.921135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.921165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.921365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.921397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 00:28:05.247 [2024-07-15 08:03:49.921524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.247 [2024-07-15 08:03:49.921555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.247 qpair failed and we were unable to recover it. 
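For context: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) was actively refused, which normally means nothing was listening on that address/port at the moment of the attempt. A minimal sketch of the failing call pattern, using plain POSIX sockets rather than SPDK's internal posix_sock_create(), with the address and port taken from the log above:

/* Sketch: why the attempts above report errno = 111. A plain connect()
 * to an address/port with no listener fails with ECONNREFUSED (111 on
 * Linux). This is ordinary POSIX sockets, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no NVMe/TCP target listening, this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}

A refusal (TCP RST) rather than a timeout suggests the target host and network path were reachable; only the listener on port 4420 was absent or not yet accepting.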
00:28:05.248 [2024-07-15 08:03:49.929019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.248 [2024-07-15 08:03:49.929084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.248 qpair failed and we were unable to recover it.
[... repeats 126 more times for tqpair=0xdbaed0 between 08:03:49.929367 and 08:03:49.956396, every connect() attempt to 10.0.0.2:4420 failing with errno = 111 and every attempt ending "qpair failed and we were unable to recover it." ...]
00:28:05.530 [2024-07-15 08:03:49.956647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.956678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.956946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.956977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.957173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.957203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.957407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.957439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.957671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.957702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.957815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.957845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.958115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.958145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.958301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.958333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.958584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.958615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.958845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.958905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 
00:28:05.530 [2024-07-15 08:03:49.959114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.959146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.959294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.959326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.959574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.959605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.959822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.959852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.960077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.960108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.960283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.960316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.960562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.960593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.960771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.960803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.961004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.961034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.961171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.961202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 
00:28:05.530 [2024-07-15 08:03:49.961482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.530 [2024-07-15 08:03:49.961515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.530 qpair failed and we were unable to recover it. 00:28:05.530 [2024-07-15 08:03:49.961717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.961747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.962025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.962063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.962280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.962312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.962448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.962479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.962675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.962707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.962888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.962919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.963163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.963194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.963407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.963454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.963671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.963702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 
00:28:05.531 [2024-07-15 08:03:49.963886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.963918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.964066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.964442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.964696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.964869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.964988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.965019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.965214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.965257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.965436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.965467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.965647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.965678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 
00:28:05.531 [2024-07-15 08:03:49.965869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.965900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.966024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.966055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.966193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.966233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.966476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.966507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.966646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.966677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.966921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.966952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.967084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.967114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.967319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.967351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.967474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.967505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.967638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.967672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 
00:28:05.531 [2024-07-15 08:03:49.967856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.967888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.968942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.968973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.969171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.969202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.969341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.969372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.969501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.969532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 
00:28:05.531 [2024-07-15 08:03:49.969723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.969754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.531 qpair failed and we were unable to recover it. 00:28:05.531 [2024-07-15 08:03:49.969945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.531 [2024-07-15 08:03:49.969976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.970122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.970153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.970409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.970623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.970655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.970898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.970929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.971067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.971098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.971279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.971311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.971492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.971524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.971715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.971746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 
00:28:05.532 [2024-07-15 08:03:49.971873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.971905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.972118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.972149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.972330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.972363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.972573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.972603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.972795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.972826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.973073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.973105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.973382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.973415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.973601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.973632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.973804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.973835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.974083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.974114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 
00:28:05.532 [2024-07-15 08:03:49.974322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.974354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.974571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.974601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.974722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.974753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.974936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.974967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.975181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.975212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.975442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.975474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.975657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.975688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.975952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.975984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.976183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.976214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.976429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.976467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 
00:28:05.532 [2024-07-15 08:03:49.976717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.976748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.976874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.976904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.977029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.977060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.977165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.977196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.977344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.977380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.977583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.977615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.977808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.977840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.978088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.978119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.978310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.978342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.978519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.978551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 
00:28:05.532 [2024-07-15 08:03:49.978742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.978773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.532 qpair failed and we were unable to recover it. 00:28:05.532 [2024-07-15 08:03:49.978910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.532 [2024-07-15 08:03:49.978941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.979077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.979108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.979246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.979278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.979388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.979420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.979687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.979717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.979850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.979882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.980157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.980189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.980401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.980434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.980581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.980612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 
00:28:05.533 [2024-07-15 08:03:49.980824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.980855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.981048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.981079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.981206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.981249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.981447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.981478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.981613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.981644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.981848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.981879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.982088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.982121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.982241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.982274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.982459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.982489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.982668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.982700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 
00:28:05.533 [2024-07-15 08:03:49.982871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.982901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.983082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.983113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.983242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.983274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.983470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.983501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.983688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.983719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.983838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.983869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.984114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.984144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.984274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.984307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.984513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.984544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.984671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.984708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 
00:28:05.533 [2024-07-15 08:03:49.984919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.984950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.985193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.985235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.985419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.985451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.985725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.533 [2024-07-15 08:03:49.985943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.533 [2024-07-15 08:03:49.985974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.533 qpair failed and we were unable to recover it. 00:28:05.534 [2024-07-15 08:03:49.986250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.534 [2024-07-15 08:03:49.986281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.534 qpair failed and we were unable to recover it. 00:28:05.534 [2024-07-15 08:03:49.986493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.534 [2024-07-15 08:03:49.986525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.534 qpair failed and we were unable to recover it. 00:28:05.534 [2024-07-15 08:03:49.986717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.534 [2024-07-15 08:03:49.986748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.534 qpair failed and we were unable to recover it. 00:28:05.534 [2024-07-15 08:03:49.986944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.534 [2024-07-15 08:03:49.986975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.534 qpair failed and we were unable to recover it. 00:28:05.534 [2024-07-15 08:03:49.987171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.534 [2024-07-15 08:03:49.987202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.534 qpair failed and we were unable to recover it. 
00:28:05.536 [2024-07-15 08:03:50.009999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.536 [2024-07-15 08:03:50.010062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.536 qpair failed and we were unable to recover it.
00:28:05.537 [2024-07-15 08:03:50.015407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.537 [2024-07-15 08:03:50.015452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.537 qpair failed and we were unable to recover it.
00:28:05.539 [2024-07-15 08:03:50.029797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.029829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.030075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.030106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.030262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.030296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.030544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.030576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.030717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.030750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.030885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.030918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.031162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.031194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.031342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.031381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.031518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.031550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.031733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.031766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 
00:28:05.539 [2024-07-15 08:03:50.031901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.031933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.032112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.032144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.032414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.032447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.032579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.032612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.032806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.032837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.032963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.032994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.033207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.033251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.033457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.033488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.033686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.033718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.033838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.033870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 
00:28:05.539 [2024-07-15 08:03:50.034003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.034035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.034215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.034257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.034384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.034416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.034660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.034692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.034818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.034850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.035032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.035064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.035236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.035298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.035582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.035617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.035734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.035765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.035956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.035987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 
00:28:05.539 [2024-07-15 08:03:50.036253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.036286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.036416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.036448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.036646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.036680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.036862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.036894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.037030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.037066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.037263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.037296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.037499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.037533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.037717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.037749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.037883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.037914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 00:28:05.539 [2024-07-15 08:03:50.038115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-07-15 08:03:50.038153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.539 qpair failed and we were unable to recover it. 
00:28:05.540 [2024-07-15 08:03:50.038336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.038368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.038585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.038623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.038817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.038849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.039098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.039130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.039316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.039349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.039550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.039582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.039703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.039734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.039948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.039979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.040250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.040283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.040404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.040435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 
00:28:05.540 [2024-07-15 08:03:50.040633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.040663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.040787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.040818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.040952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.040984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.041189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.041220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.041327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.041359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.041485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.041517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.041790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.041822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.042032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.042064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.042266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.042298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.042427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.042458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 
00:28:05.540 [2024-07-15 08:03:50.042585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.042616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.042859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.042890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.043007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.043038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.043215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.043256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.043436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.043468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.043579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.043611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.043835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.043897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.044183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.044218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.044369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.044400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.044547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.044577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 
00:28:05.540 [2024-07-15 08:03:50.044704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.044735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.044856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.044886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.045150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.045181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.045386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.045417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.045610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.045641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.045743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.045773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.045905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.045936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.046048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.046080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.046195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.046242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 00:28:05.540 [2024-07-15 08:03:50.046381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-07-15 08:03:50.046420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.540 qpair failed and we were unable to recover it. 
00:28:05.540 [2024-07-15 08:03:50.046608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.046640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.046757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.046787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.047033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.047064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.047239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.047272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.047387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.047417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.047693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.047725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.047923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.047955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.048135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.048166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.048351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.048384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.048647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.048679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 
00:28:05.541 [2024-07-15 08:03:50.048970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.049002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.049193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.049234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.049505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.049536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.049724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.049756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.050810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.050841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 
00:28:05.541 [2024-07-15 08:03:50.051123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.051153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.051358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.051390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.051510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.051541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.051763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.051794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.051985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.052016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.052243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.052275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.052548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.052590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.052790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.052822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.052963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.052995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.053134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.053165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 
00:28:05.541 [2024-07-15 08:03:50.053307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.053339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.053540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.053573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.053688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.053721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.053901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.053931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.054139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.054170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.054435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.054466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.054656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.054687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.054835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.054867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.055059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.055091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 00:28:05.541 [2024-07-15 08:03:50.055283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.055316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.541 qpair failed and we were unable to recover it. 
00:28:05.541 [2024-07-15 08:03:50.055519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.541 [2024-07-15 08:03:50.055551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.055799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.055831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.056012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.056044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.056170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.056202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.056414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.056447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.056579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.056610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.056800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.056832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.057094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.057125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.057314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.057347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.057483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.057514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 
00:28:05.542 [2024-07-15 08:03:50.057710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.057741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.057921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.057953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.058153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.058185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.058311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.058350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.058607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.058638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.058853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.058883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.059001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.059031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.059161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.059193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.059405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.059442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 00:28:05.542 [2024-07-15 08:03:50.059642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.542 [2024-07-15 08:03:50.059673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.542 qpair failed and we were unable to recover it. 
00:28:05.542 [2024-07-15 08:03:50.059785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.542 [2024-07-15 08:03:50.059816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.542 qpair failed and we were unable to recover it.
00:28:05.543 [2024-07-15 08:03:50.064340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.543 [2024-07-15 08:03:50.064406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.543 qpair failed and we were unable to recover it.
00:28:05.544 [2024-07-15 08:03:50.073271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.544 [2024-07-15 08:03:50.073328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.544 qpair failed and we were unable to recover it.
00:28:05.545 [2024-07-15 08:03:50.082363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.545 [2024-07-15 08:03:50.082415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.545 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 08:03:50.059785 through 08:03:50.105508, cycling among tqpair handles 0x7f2e30000b90, 0x7f2e38000b90, 0x7f2e28000b90 and 0xdbaed0, all against addr=10.0.0.2, port=4420 ...]
00:28:05.547 [2024-07-15 08:03:50.105714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.547 [2024-07-15 08:03:50.105745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.547 qpair failed and we were unable to recover it. 00:28:05.547 [2024-07-15 08:03:50.105943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.105974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.106133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.106278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.106437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.106663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.106812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.106997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.107028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.107152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.107184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.107417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.107449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 
00:28:05.548 [2024-07-15 08:03:50.107558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.107589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.107794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.107825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.108049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.108205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.108382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.108589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.108810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.108992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.109024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.109151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.109182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.109368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.109399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 
00:28:05.548 [2024-07-15 08:03:50.109526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.109557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.109754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.109784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.110013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.110044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.110235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.110267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.110447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.110484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.110604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.110636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.110848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.110878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.111059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.111090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.111212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.111256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.111437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.111467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 
00:28:05.548 [2024-07-15 08:03:50.111646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.111678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.111861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.111892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.112157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.112188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.112397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.112429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.112690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.112721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.112924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.112955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.113090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.113121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.113271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.113303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.113582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.113614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.113728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.113758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 
00:28:05.548 [2024-07-15 08:03:50.113881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.113912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.548 [2024-07-15 08:03:50.114095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.548 [2024-07-15 08:03:50.114126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.548 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.114391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.114423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.114669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.114701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.114891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.114921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.115039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.115207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.115392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.115631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.115801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 
00:28:05.549 [2024-07-15 08:03:50.115961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.115994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.116174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.116206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.116478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.116511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.116637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.116669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.116801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.116832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.117092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.117124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.117321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.117353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.117479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.117511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.117701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.117731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.117921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.117952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 
00:28:05.549 [2024-07-15 08:03:50.118069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.118100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.118281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.118313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.118562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.118593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.118783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.118815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.118939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.118970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.119172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.119209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.119330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.119362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.119501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.119533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.119664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.119695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.119884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.119920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 
00:28:05.549 [2024-07-15 08:03:50.120034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.120066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.120204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.120248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.120469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.120500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.120631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.120663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.120849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.120881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.121000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.121032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.121165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.121197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.121353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.121389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.121639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.121671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.121886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.121919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 
00:28:05.549 [2024-07-15 08:03:50.122029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.122061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.122191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.122223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.549 [2024-07-15 08:03:50.122517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.549 [2024-07-15 08:03:50.122548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.549 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.122692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.122723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.122920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.122951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.123083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.123114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.123262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.123296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.123419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.123450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.123632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.123663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.123915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.123948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 
00:28:05.550 [2024-07-15 08:03:50.124057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.124088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.124281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.124314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.124433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.124468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.124695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.124727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.124851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.124882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.125024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.125055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.125298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.125330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.125461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.125492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.125738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.125769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.125983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.126014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 
00:28:05.550 [2024-07-15 08:03:50.126145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.126176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.126401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.126433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.126570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.126601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.126750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.126781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.127047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.127078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.127327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.127359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.127567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.127599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.127723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.127756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.127900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.127932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.128116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.128148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 
00:28:05.550 [2024-07-15 08:03:50.128335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.128367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.128633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.128665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.128793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.128824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.128944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.128975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.129164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.129196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.129336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.129380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.129526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.129559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.129698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.129730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.129925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.129957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.130088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.130125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 
00:28:05.550 [2024-07-15 08:03:50.130310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.130343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.130540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.130571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.130710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.550 qpair failed and we were unable to recover it. 00:28:05.550 [2024-07-15 08:03:50.130944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.550 [2024-07-15 08:03:50.130976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.131179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.131210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.131420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.131452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.131638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.131670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.131801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.131832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.131973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.132004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.132212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.132256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 
00:28:05.551 [2024-07-15 08:03:50.132398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.132429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.132592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.132623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.132871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.132902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.133044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.133075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.133274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.133307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.133515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.133547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.133736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.133767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.133983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.134014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.134278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.134310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.134434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.134465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 
00:28:05.551 [2024-07-15 08:03:50.134642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.134673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.134800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.134831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.135006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.135038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.135283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.135315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.135477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.135508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.135704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.135736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.135873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.135905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.136009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.136040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.136219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.136260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.136459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.136490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 
00:28:05.551 [2024-07-15 08:03:50.136638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.136669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.136921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.136953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.137217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.137261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.137549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.137580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.137759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.137790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.137931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.137962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.138113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.138144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.138325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.138358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.551 [2024-07-15 08:03:50.138507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.551 [2024-07-15 08:03:50.138538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.551 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.138679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.138716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 
00:28:05.552 [2024-07-15 08:03:50.138863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.138898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.139095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.139131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.139381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.139414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.139598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.139632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.139906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.139938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.140209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.140249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.140433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.140465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.140725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.140756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.140901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.140933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.141124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.141155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 
00:28:05.552 [2024-07-15 08:03:50.141453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.141486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.141677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.141975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.142006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.142188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.142221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.142448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.142480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.142595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.142625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.142907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.142939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.143128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.143160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.143424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.143455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.143706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.143737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 
00:28:05.552 [2024-07-15 08:03:50.143920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.143951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.144152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.144184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.144476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.144508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.144696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.144727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.145044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.145076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.145349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.145381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.145678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.145710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.145965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.145997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.146302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.146334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.146514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.146545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 
00:28:05.552 [2024-07-15 08:03:50.146796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.146978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.147010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.147203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.147245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.147456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.147488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.147732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.147763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.147980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.148011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.148276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.148308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.552 [2024-07-15 08:03:50.148578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.552 [2024-07-15 08:03:50.148609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.552 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.148857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.148888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.149132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.149169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 
00:28:05.553 [2024-07-15 08:03:50.149391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.149424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.149620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.149651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.149971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.150002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.150183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.150214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.150465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.150496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.150766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.150797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.151055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.151086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.151361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.151393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.151542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.151573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.151755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.151787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 
00:28:05.553 [2024-07-15 08:03:50.152034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.152065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.152200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.152241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.152437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.152469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.152685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.152717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.152896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.152928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.153176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.153208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.153435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.153468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.153648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.153679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.153947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.153978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.154273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.154306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 
00:28:05.553 [2024-07-15 08:03:50.154549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.154580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.154777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.154808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.155080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.155111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.155370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.155401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.155652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.155684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.155930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.156212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.156262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.156507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.156539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.156754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.156785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.157051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.157082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 
00:28:05.553 [2024-07-15 08:03:50.157329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.157360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.157654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.157685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.157966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.157997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.158118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.158148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.158419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.158452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.158719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.158750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.158967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.158997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.553 qpair failed and we were unable to recover it. 00:28:05.553 [2024-07-15 08:03:50.159290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.553 [2024-07-15 08:03:50.159323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.159566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.159598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.159914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.159950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 
00:28:05.554 [2024-07-15 08:03:50.160200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.160239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.160428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.160460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.160680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.160711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.160901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.160931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.161201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.161239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.161521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.161552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.161802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.161833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.162017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.162048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.162340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.162373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.162625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.162656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 
00:28:05.554 [2024-07-15 08:03:50.162891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.162922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.163167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.163198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.163422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.163462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.163672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.163703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.163969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.164000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.164284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.164317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.164601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.164632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.164758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.164789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.164981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.165012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.165235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.165267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 
00:28:05.554 [2024-07-15 08:03:50.165513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.165544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.165809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.165840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.166053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.166083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.166335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.166367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.166587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.166618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.166875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.166908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.167135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.167176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.167501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.167537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.167742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.167773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.168071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.168102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 
00:28:05.554 [2024-07-15 08:03:50.168300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.168332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.168603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.168635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.168914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.168945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.169062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.169093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.169291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.169323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.169450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.169482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.169757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.169789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.554 [2024-07-15 08:03:50.170008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.554 [2024-07-15 08:03:50.170039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.554 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.170243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.170273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.170520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.170555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 
00:28:05.555 [2024-07-15 08:03:50.170853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.170885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.171153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.171185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.171323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.171356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.171627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.171658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.171847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.171878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.172147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.172178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.172394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.172425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.172672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.172703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.172945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.172976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.173222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.173272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 
00:28:05.555 [2024-07-15 08:03:50.173565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.173596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.173812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.173843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.174039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.174071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.174267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.174301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.174494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.174525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.174772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.174803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.175071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.175102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.175313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.175345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.175638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.175670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.175854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.175885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 
00:28:05.555 [2024-07-15 08:03:50.176158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.176189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.176408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.176440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.176681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.176712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.176912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.176942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.177186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.177217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.177479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.177511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.177765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.177802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.178054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.178086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.178302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.178334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.178465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.178497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 
00:28:05.555 [2024-07-15 08:03:50.178689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.178721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.178987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.555 [2024-07-15 08:03:50.179019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.555 qpair failed and we were unable to recover it. 00:28:05.555 [2024-07-15 08:03:50.179236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.179268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.179540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.179571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.179763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.179795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.179992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.180024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.180293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.180327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.180527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.180558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.180808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.180839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 00:28:05.556 [2024-07-15 08:03:50.181083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.556 [2024-07-15 08:03:50.181115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.556 qpair failed and we were unable to recover it. 
00:28:05.556 [2024-07-15 08:03:50.181311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.181344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.181619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.181651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.181929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.181961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.182092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.182123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.182418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.182452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.182722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.182753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.182886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.182917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.183104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.183135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.183285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.183316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.183596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.183626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.183758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.183790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.184058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.184089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.184341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.184374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.184635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.184672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.184940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.184972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.185163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.185196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.185456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.185487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.185779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.185810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.185941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.185973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.186262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.186294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.186569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.186601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.186785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.186817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.187032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.187063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.187335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.187367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.187587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.187618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.187889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.187921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.188210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.188252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.188477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.188508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.188781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.188812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.189077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.189109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.189334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.189368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.189614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.189646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.189895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.556 [2024-07-15 08:03:50.189926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.556 qpair failed and we were unable to recover it.
00:28:05.556 [2024-07-15 08:03:50.190155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.190186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.190394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.190426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.190616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.190647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.190867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.190899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.191141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.191172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.191439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.191471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.191655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.191687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.191884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.191915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.192106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.192138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.192403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.192434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.192732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.192763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.193058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.193089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.193273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.193306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.193597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.193629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.193870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.193901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.194154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.194185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.194377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.194410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.194587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.194618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.194823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.194855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.195071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.195103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.195287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.195318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.195525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.195564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.195817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.195848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.196096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.196127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.196265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.196297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.196544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.196575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.196753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.196784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.197049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.197080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.197278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.197310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.197565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.197596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.197842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.197873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.198053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.198084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.198266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.198298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.198417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.198448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.198718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.198756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.198897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.198927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.199193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.199233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.199366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.199397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.199577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.199607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.199807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.557 [2024-07-15 08:03:50.200023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.557 [2024-07-15 08:03:50.200054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.557 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.200352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.200384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.200567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.200598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.200846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.200877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.201187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.201218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.201523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.201556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.201823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.201853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.202058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.202088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.202315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.202348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.202527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.202764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.202795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.203061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.203091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.203318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.203349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.203564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.203595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.203817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.203848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.204030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.204061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.204325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.204356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.204547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.204577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.204845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.204877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.205061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.205091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.205362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.205393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.205692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.205729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.205998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.206030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.206159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.206190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.206472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.206505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.206709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.206740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.206987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.207018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.207293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.207324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.207611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.207642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.207917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.207948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.208145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.208176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.208415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.208447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.208718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.208749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.208962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.208993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.209246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.209284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.209555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.209587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.209835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.209867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.210126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.210157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.210376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.210409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.210659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.210690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.210950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.558 [2024-07-15 08:03:50.210981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.558 qpair failed and we were unable to recover it.
00:28:05.558 [2024-07-15 08:03:50.211193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.211234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.211482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.211513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.211709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.211740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.212013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.212044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.212325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.212356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.212578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.212610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.212828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.212859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.213055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.213086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.213338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.213371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.213645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.213676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.213963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.213995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.214270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.214302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.214585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.214616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.214888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.214920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.215105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.215137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.215398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.215430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.215679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.215710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.215982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.216013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.216308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.216340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.216616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.216647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.216855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.216891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.217073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.217104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.217374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.217406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.217681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.217712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.217906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.217937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.218129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.218160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.218430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.218462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.218657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.218688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.218935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.218965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.219172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.219203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.219483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.219514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.219694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.219726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.220003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.220035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.220299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.220330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.220549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.220580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.220841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.559 [2024-07-15 08:03:50.220873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.559 qpair failed and we were unable to recover it.
00:28:05.559 [2024-07-15 08:03:50.221002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.221033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.221302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.221334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.221621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.221652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.221932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.221963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.222242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.222274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.222559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.222590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.222785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.222817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.223065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.223096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.223374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.223405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.223653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.223686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.223883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.223915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.224116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.224147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.224381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.224412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.224632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.224663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.224846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.224877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.225008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.225039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.225267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.225300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.225517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.225548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.225791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.225822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.560 [2024-07-15 08:03:50.225954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.560 [2024-07-15 08:03:50.225985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.560 qpair failed and we were unable to recover it.
00:28:05.561 [2024-07-15 08:03:50.226161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.226193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.226487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.226529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.226777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.226808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.227103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.227135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.227410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.227449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.227694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.227726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.227976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.228007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.228262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.228296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.228546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.228577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.228849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.228881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 
00:28:05.561 [2024-07-15 08:03:50.229169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.229200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.229440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.229471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.229664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.229696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.229889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.229919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.230114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.230145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.230390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.230422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.230547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.230579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.230797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.230829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.230976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.231008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.231276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.231308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 
00:28:05.561 [2024-07-15 08:03:50.231498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.231529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.231729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.231760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.232032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.232063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.232355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.232387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.232604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.232636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.233071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.233107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.233369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.233405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.233679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.233710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.233950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.233981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.234254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.234287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 
00:28:05.561 [2024-07-15 08:03:50.234583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.234614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.234908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.234946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.235154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.235185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.235476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.235509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.235653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.235684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.235927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.235958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.236204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.236245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.236439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.236471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.236664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.236695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 00:28:05.561 [2024-07-15 08:03:50.236961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.236993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.561 qpair failed and we were unable to recover it. 
00:28:05.561 [2024-07-15 08:03:50.237282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.561 [2024-07-15 08:03:50.237315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.237597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.237629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.237905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.237936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.238195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.238236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.238530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.238562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.238889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.238937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.239135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.239166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.239470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.239503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.239703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.239734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.239869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.239901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 
00:28:05.562 [2024-07-15 08:03:50.240170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.240203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.240472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.240505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.240798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.240830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.241033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.241064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.241339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.241373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.241586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.241619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.241878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.241909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.242103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.242135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.242405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.242443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.242726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.242758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 
00:28:05.562 [2024-07-15 08:03:50.242903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.242934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.243179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.243211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.243484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.243516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.243803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.243834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.244091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.244121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.244382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.244413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.244608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.244639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.244883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.244914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.245108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.245140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.245387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.245420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 
00:28:05.562 [2024-07-15 08:03:50.245557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.245588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.245835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.245866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.246171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.246202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.246354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.246386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.246571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.246603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.246869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.246900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.247147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.247179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.247392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.247425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.247685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.247717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.248015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.248047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 
00:28:05.562 [2024-07-15 08:03:50.248271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.562 [2024-07-15 08:03:50.248304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.562 qpair failed and we were unable to recover it. 00:28:05.562 [2024-07-15 08:03:50.248572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.248604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.248803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.248835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.249091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.249122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.249417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.249451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.249671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.249707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.249889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.249921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.250054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.250086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.250373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.250406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.250596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.250628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 
00:28:05.563 [2024-07-15 08:03:50.250889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.250919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.251131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.251162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.251413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.251445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.251689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.251720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.251950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.251981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.252173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.252205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.252408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.252439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.252583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.252615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.252808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.252845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.253089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.253120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 
00:28:05.563 [2024-07-15 08:03:50.253371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.253404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.253601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.253632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.253842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.253873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.254057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.254089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.254297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.254330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.254534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.254564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.254776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.254807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.255098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.255129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.255312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.255344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.255593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.255624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 
00:28:05.563 [2024-07-15 08:03:50.255867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.255898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.256078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.256109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.256384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.256417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.256614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.256645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.256851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.256882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.257149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.257180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.257389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.257422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.257676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.257707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.257973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.258004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.258282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.258315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 
00:28:05.563 [2024-07-15 08:03:50.258574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.563 [2024-07-15 08:03:50.258605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.563 qpair failed and we were unable to recover it. 00:28:05.563 [2024-07-15 08:03:50.258793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.258824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.259097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.259128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.259325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.259357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.259500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.259531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.259679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.259712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.259986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.260017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.260271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.260303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.260493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.260524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.260770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.260801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 
00:28:05.564 [2024-07-15 08:03:50.261020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.261051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.261249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.261282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.261463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.261494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.261692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.261725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.261932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.261962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.564 [2024-07-15 08:03:50.262118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.564 [2024-07-15 08:03:50.262149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.564 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.262450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.262483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.262686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.262718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.262913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.262950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.263144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.263174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 
00:28:05.843 [2024-07-15 08:03:50.263444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.263475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.263603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.843 [2024-07-15 08:03:50.263633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.843 qpair failed and we were unable to recover it. 00:28:05.843 [2024-07-15 08:03:50.263760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.263790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.264092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.264123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.264390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.264421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.264593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.264624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.264743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.264774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.264968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.265000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.265248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.265280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.265506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.265537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 
00:28:05.844 [2024-07-15 08:03:50.265744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.265775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.265917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.265949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.266171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.266203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.266413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.266444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.266712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.266743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.266961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.266993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.267194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.267235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.267443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.267474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.267746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.267777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.268077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.268111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 
00:28:05.844 [2024-07-15 08:03:50.268245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.268278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.268480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.268511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.268757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.268790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.269037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.269068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.269262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.269294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.269581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.269624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.269830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.269861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.270078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.270109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.270372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.270405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 00:28:05.844 [2024-07-15 08:03:50.270545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.844 [2024-07-15 08:03:50.270576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.844 qpair failed and we were unable to recover it. 
00:28:05.847 [2024-07-15 08:03:50.297071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-07-15 08:03:50.297111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.847 qpair failed and we were unable to recover it.
00:28:05.848 [2024-07-15 08:03:50.307393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.848 [2024-07-15 08:03:50.307441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.848 qpair failed and we were unable to recover it.
00:28:05.849 [2024-07-15 08:03:50.316248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.316280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.316474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.316505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.316647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.316678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.316874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.316905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.317139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.317177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.317344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.317377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.317518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.317549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.317751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.317782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.318008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.318040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.318255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.318287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 
00:28:05.849 [2024-07-15 08:03:50.318430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.318460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.318680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.318711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.318950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.318981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.319175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.319207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.319360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.319392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.319573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.319604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.319748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.319779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.319981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.320017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.320220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.320260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.320459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.320491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 
00:28:05.849 [2024-07-15 08:03:50.320638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.320669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.320867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.320899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.321024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.321055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.321266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.321299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.321432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.321463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.321671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.849 [2024-07-15 08:03:50.321702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.849 qpair failed and we were unable to recover it. 00:28:05.849 [2024-07-15 08:03:50.321910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.321941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.322142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.322173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.322348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.322380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.322605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.322636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 
00:28:05.850 [2024-07-15 08:03:50.322892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.322923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.323236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.323269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.323462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.323493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.323717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.323748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.323940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.323971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.324248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.324281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.324481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.324513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.324702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.324733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.324936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.324967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.325150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.325181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 
00:28:05.850 [2024-07-15 08:03:50.325307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.325339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.325538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.325569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.325717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.325748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.325958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.325990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.326123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.326165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.326373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.326407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.326587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.326618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.326812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.326845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.327118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.327149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.327358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.327391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 
00:28:05.850 [2024-07-15 08:03:50.327596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.327631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.327771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.327802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.328069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.328100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.328297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.328352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.328559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.328590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.328778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.328809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.329053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.329085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.329274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.329306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.329509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.329541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.329789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.329821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 
00:28:05.850 [2024-07-15 08:03:50.330046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.330077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.330261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.330293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.330563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.330595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.330839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.330870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.331082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.331113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.331313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.331346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.850 [2024-07-15 08:03:50.331561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.850 [2024-07-15 08:03:50.331592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.850 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.331839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.331870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.332094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.332125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.332259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.332291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 
00:28:05.851 [2024-07-15 08:03:50.332424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.332456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.332725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.332762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.333034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.333065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.333374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.333407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.333606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.333636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.333768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.333800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.334018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.334049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.334247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.334278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.334476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.334508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.334652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.334683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 
00:28:05.851 [2024-07-15 08:03:50.334938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.334970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.335245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.335278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.335418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.335450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.335584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.335615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.335750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.335781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.336011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.336043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.336187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.336219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.336355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.336387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.336570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.336601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.336742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.336773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 
00:28:05.851 [2024-07-15 08:03:50.336971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.337002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.337238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.337271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.337516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.337548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.337670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.337702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.337841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.337873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.338081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.338111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.338325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.338358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.338500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.338532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.338655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.338687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.338873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.338905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 
00:28:05.851 [2024-07-15 08:03:50.339093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.339124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.339257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.339288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.339501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.339532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.339666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.339697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.339891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.339923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.340065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.340097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.340235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.340266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.851 qpair failed and we were unable to recover it. 00:28:05.851 [2024-07-15 08:03:50.340458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.851 [2024-07-15 08:03:50.340489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.340620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.340652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.340851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.340883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 
00:28:05.852 [2024-07-15 08:03:50.341069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.341101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.341259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.341293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.341459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.341498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.341695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.341726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.341986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.342017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.342163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.342194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.342408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.342443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.342599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.342631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.342861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.342892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.343036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.343067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 
00:28:05.852 [2024-07-15 08:03:50.343314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.343345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.343528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.343559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.343807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.343837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.344050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.344082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.344375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.344407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.344653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.344690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.344837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.344868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.345103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.345134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.345328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.345360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.345557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.345588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 
00:28:05.852 [2024-07-15 08:03:50.345788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.345819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.346017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.346048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.346353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.346385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.346568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.346599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.346849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.346879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.347124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.347155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.347406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.347437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.347629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.347661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.347776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.347807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.348104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.348136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 
00:28:05.852 [2024-07-15 08:03:50.348341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.348373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.348503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.348534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.852 [2024-07-15 08:03:50.348737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.852 [2024-07-15 08:03:50.348767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.852 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.349027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.349058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.349202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.349244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.349488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.349519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.349657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.349689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.349953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.349985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.350167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.350198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.350366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.350398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 
00:28:05.853 [2024-07-15 08:03:50.350702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.350734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.351003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.351033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.351243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.351279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.351471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.351502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.351777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.351808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.352063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.352094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.352353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.352385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.352528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.352560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.352761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.352792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.353008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.353038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 
00:28:05.853 [2024-07-15 08:03:50.353290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.353323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.353525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.353557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.353790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.353821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.354088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.354119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.354258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.354292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.354496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.354528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.354666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.354698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.354929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.354959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.355205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.355246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.355515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.355546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 
00:28:05.853 [2024-07-15 08:03:50.355704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.355735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.356029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.356060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.356250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.356282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.356505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.356537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.356829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.356860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.357082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.357113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.357348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.357381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.357590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.357620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.357800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.357831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.358030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.358062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 
00:28:05.853 [2024-07-15 08:03:50.358201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.358262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.358538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.358570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.853 qpair failed and we were unable to recover it. 00:28:05.853 [2024-07-15 08:03:50.358758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.853 [2024-07-15 08:03:50.358789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.359031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.359062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.359244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.359276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.359464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.359495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.359629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.359660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.359878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.359908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.360153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.360184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.360467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.360501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 
00:28:05.854 [2024-07-15 08:03:50.360766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.360797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.361031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.361062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.361247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.361290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.361443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.361474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.361746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.361777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.362029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.362060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.362348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.362380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.362648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.362679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.362981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.363012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.363253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.363284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 
00:28:05.854 [2024-07-15 08:03:50.363434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.363465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.363709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.363740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.363884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.363915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.364178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.364210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.364402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.364434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.364580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.364610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.364806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.364838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.365019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.365051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.365202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.365244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.365377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.365407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 
00:28:05.854 [2024-07-15 08:03:50.365622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.365653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.365858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.365890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.366158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.366189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.366432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.366464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.366613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.366644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.366846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.366877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.367065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.367096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.367289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.367322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.367501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.367532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.367679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.367710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 
00:28:05.854 [2024-07-15 08:03:50.367997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.368029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.368243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.368275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.854 [2024-07-15 08:03:50.368537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.854 [2024-07-15 08:03:50.368569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.854 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.368761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.368792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.368986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.369018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.369207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.369246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.369437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.369468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.369736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.369767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.369970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.370001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.370249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.370281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 
00:28:05.855 [2024-07-15 08:03:50.370529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.370559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.370753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.370964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.371000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.371189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.371220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.371431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.371462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.371641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.371672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.371949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.371980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.372162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.372193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.372512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.372553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.372753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.372785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 
00:28:05.855 [2024-07-15 08:03:50.373031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.373063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.373195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.373239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.373458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.373490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.373746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.373778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.374037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.374068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.374191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.374223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.374537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.374569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.374720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.374751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.374975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.375006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.375147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.375178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 
00:28:05.855 [2024-07-15 08:03:50.375390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.375423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.375623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.375654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.375853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.375883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.376076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.376107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.376383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.376416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.376665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.376696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.376901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.376932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.377129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.377160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.377355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.377387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.377525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.377562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 
00:28:05.855 [2024-07-15 08:03:50.377704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.377736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.378031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.378063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.855 qpair failed and we were unable to recover it. 00:28:05.855 [2024-07-15 08:03:50.378285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.855 [2024-07-15 08:03:50.378317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.378515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.378546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.378789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.378821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.379075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.379107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.379250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.379282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.379433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.379465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.379684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.379715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.379903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.379937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 
00:28:05.856 [2024-07-15 08:03:50.380069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.380100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.380334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.380366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.380576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.380607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.380903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.380935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.381198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.381237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.381449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.381481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.381727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.381758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.382093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.382124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.382308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.382340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.382469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.382501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 
00:28:05.856 [2024-07-15 08:03:50.382770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.382801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.383069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.383101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.383304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.383336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.383597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.383630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.383772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.383803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.383939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.383969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.384252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.384287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.384435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.384468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.384626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.384657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.384790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.384821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 
00:28:05.856 [2024-07-15 08:03:50.385096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.385128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.385369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.385401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.385676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.385707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.385994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.386025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.386308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.386340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.386610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.386642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.386918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.386950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.856 [2024-07-15 08:03:50.387192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.856 [2024-07-15 08:03:50.387223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.856 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.387478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.387510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.387671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.387702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 
00:28:05.857 [2024-07-15 08:03:50.387922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.387968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.388180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.388213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.388427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.388460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.388603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.388635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.388839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.388871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.389092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.389135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.389420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.389452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.389652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.389683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.389835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.389866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.390065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.390098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 
00:28:05.857 [2024-07-15 08:03:50.390320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.390353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.390567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.390600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.390795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.390827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.391041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.391079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.391358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.391391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.391515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.391547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.391749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.391781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.391966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.391996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.392175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.392206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 00:28:05.857 [2024-07-15 08:03:50.392345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.857 [2024-07-15 08:03:50.392378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.857 qpair failed and we were unable to recover it. 
00:28:05.858 [2024-07-15 08:03:50.397357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.858 [2024-07-15 08:03:50.397398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.858 qpair failed and we were unable to recover it.
00:28:05.858 [... the same connect() failed / sock connection error / qpair failed triple repeats for tqpair=0x7f2e28000b90 through 2024-07-15 08:03:50.432598 ...]
00:28:05.862 [2024-07-15 08:03:50.432716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.432747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.432936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.432967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.433165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.433195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.433326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.433362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.433586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.433618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.433812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.433844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.433994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.434025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.434236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.434269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.434391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.434423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 00:28:05.862 [2024-07-15 08:03:50.434677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.434709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.862 qpair failed and we were unable to recover it. 
00:28:05.862 [2024-07-15 08:03:50.434843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.862 [2024-07-15 08:03:50.434874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.435051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.435082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.435300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.435332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.435514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.435545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.435657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.435688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.435811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.435843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.436041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.436074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.436192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.436223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.436434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.436465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.436586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.436618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 
00:28:05.863 [2024-07-15 08:03:50.436753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.436789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.436982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.437014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.437219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.437258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.437510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.437542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.437692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.437724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.437923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.437954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.438148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.438180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.438326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.438358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.438483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.438514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.438648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.438679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 
00:28:05.863 [2024-07-15 08:03:50.438923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.438955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.439890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.439921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.440046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.440196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.440375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 
00:28:05.863 [2024-07-15 08:03:50.440618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.440784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.440936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.440968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.441083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.441115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.441327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.441359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.441481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.441513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.441704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.441735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.441999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.442030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.442167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.442198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 00:28:05.863 [2024-07-15 08:03:50.442322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.863 [2024-07-15 08:03:50.442355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.863 qpair failed and we were unable to recover it. 
00:28:05.864 [2024-07-15 08:03:50.442470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.442502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.442751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.442783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.442971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.443116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.443326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.443576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.443724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.443946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.443978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.444168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.444199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.444339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.444370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 
00:28:05.864 [2024-07-15 08:03:50.444495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.444538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.444668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.444700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.444933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.444964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.445147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.445179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.445331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.445363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.445548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.445580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.445693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.445725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.445867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.445899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.446034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.446184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 
00:28:05.864 [2024-07-15 08:03:50.446348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.446481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.446635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.446861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.446892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.447956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.447987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 
00:28:05.864 [2024-07-15 08:03:50.448098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.448257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.448477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.448622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.448791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.448962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.448993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.449107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.449138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.449343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.449375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.864 qpair failed and we were unable to recover it. 00:28:05.864 [2024-07-15 08:03:50.449557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.864 [2024-07-15 08:03:50.449589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.449791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.449822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 
00:28:05.865 [2024-07-15 08:03:50.450017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.450048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.450164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.450195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.450405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.450450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.450576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.450608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.450801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.451132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.451163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.451359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.451392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.451518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.451550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.451677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.451709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.451911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.451942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 
00:28:05.865 [2024-07-15 08:03:50.452207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.452249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.452392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.452424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.452676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.452707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.452824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.452856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.453943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.453974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 
00:28:05.865 [2024-07-15 08:03:50.454158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.454190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.454314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.454346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.454571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.454602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.454718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.454749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.454871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.454907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.455035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.455195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.455395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.455617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.455779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 
00:28:05.865 [2024-07-15 08:03:50.455931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.455961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.456167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.456198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.456338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.865 [2024-07-15 08:03:50.456368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.865 qpair failed and we were unable to recover it. 00:28:05.865 [2024-07-15 08:03:50.456485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.456516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.456642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.456673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.456805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.456836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.456966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.456997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.457119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.457157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.457290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.457322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 00:28:05.866 [2024-07-15 08:03:50.457570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.866 [2024-07-15 08:03:50.457601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.866 qpair failed and we were unable to recover it. 
00:28:05.866 [2024-07-15 08:03:50.457808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.866 [2024-07-15 08:03:50.457839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.866 qpair failed and we were unable to recover it.
[... the same three-line record repeats verbatim for tqpair=0x7f2e28000b90, only the timestamps advancing, through 08:03:50.462410 ...]
00:28:05.866 [2024-07-15 08:03:50.462645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.866 [2024-07-15 08:03:50.462687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.866 qpair failed and we were unable to recover it.
[... the record then repeats verbatim for the new tqpair=0x7f2e38000b90, timestamps advancing, through 08:03:50.498371 ...]
00:28:05.871 [2024-07-15 08:03:50.498339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.871 [2024-07-15 08:03:50.498371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.871 qpair failed and we were unable to recover it.
00:28:05.871 [2024-07-15 08:03:50.498501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.498534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.498668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.498699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.498821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.498852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.498978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.499873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.499998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 
00:28:05.871 [2024-07-15 08:03:50.500150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.500329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.500554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.500710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.500860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.500895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.501081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.501222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.501263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.501382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.501414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.871 [2024-07-15 08:03:50.501526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.871 [2024-07-15 08:03:50.501557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.871 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.501761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.501791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 
00:28:05.872 [2024-07-15 08:03:50.501912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.501945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.502853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.502885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.503000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.503286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.503447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 
00:28:05.872 [2024-07-15 08:03:50.503632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.503801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.503952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.503982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.504937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.504969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.505093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.505125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 
00:28:05.872 [2024-07-15 08:03:50.505244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.505276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.505414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.505446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.505645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.505676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.505877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.505908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.506897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.506928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 
00:28:05.872 [2024-07-15 08:03:50.507049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.507261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.507422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.507565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.507712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.507870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.507903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.508014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.508045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.508169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.508201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.508346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.872 [2024-07-15 08:03:50.508377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.872 qpair failed and we were unable to recover it. 00:28:05.872 [2024-07-15 08:03:50.508500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.508532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 
00:28:05.873 [2024-07-15 08:03:50.508650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.508682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.508804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.508835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.508970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.509772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.509969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.510125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 
00:28:05.873 [2024-07-15 08:03:50.510276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.510430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.510691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.510833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.510864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.510990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.511268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.511434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.511592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.511744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.511885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.511918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 
00:28:05.873 [2024-07-15 08:03:50.512036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.512256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.512416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.512562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.512723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.512876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.512907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.513118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.513151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.513287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.513319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.513454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.513485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.513609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.513641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 
00:28:05.873 [2024-07-15 08:03:50.513755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.513787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.513973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.514005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.514193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.514231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.514418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.514450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.514561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.514597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.514790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.514822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.515014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.515046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.515184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.515216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.515414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.873 [2024-07-15 08:03:50.515446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.873 qpair failed and we were unable to recover it. 00:28:05.873 [2024-07-15 08:03:50.515581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.515613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 
00:28:05.874 [2024-07-15 08:03:50.515759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.515790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.515907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.515939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.516071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.516102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.516220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.516263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.516455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.516487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.516696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.516728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.516913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.516944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.517069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.517100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.517258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.517292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.517488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.517519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 
00:28:05.874 [2024-07-15 08:03:50.517652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.517682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.517825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.517857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.517968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.518966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.518997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.519119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 
00:28:05.874 [2024-07-15 08:03:50.519272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.519447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.519636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.519805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.519963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.519996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.520181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.520214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.520349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.520381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.520524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.520556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.520677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.520709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.520913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.520944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 
00:28:05.874 [2024-07-15 08:03:50.521198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.521243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.521430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.521462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.521655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.521688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.521805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.521838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.874 [2024-07-15 08:03:50.521979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.874 [2024-07-15 08:03:50.522017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.874 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.522149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.522182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.522615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.522693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.522926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.522961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.523210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.523253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.523446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.523477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 
00:28:05.875 [2024-07-15 08:03:50.523749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.523781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.523911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.523944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.524885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.524917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.525116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.525149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 00:28:05.875 [2024-07-15 08:03:50.525339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.875 [2024-07-15 08:03:50.525372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.875 qpair failed and we were unable to recover it. 
00:28:05.875 [2024-07-15 08:03:50.525617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.525649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.525762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.525794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.525920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.525953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.526132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.526164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.526360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.526393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.526511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.526543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.526745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.526776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.526912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.526943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.527092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.527124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.527250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.527284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.527465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.527498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.527730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.527779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.528051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.528216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.528410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.528590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.528806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.528994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.529026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.529250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.529283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.529414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.529445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.529628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.529661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.529865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.529897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.530079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.530111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.530302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.530335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.875 qpair failed and we were unable to recover it.
00:28:05.875 [2024-07-15 08:03:50.530519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.875 [2024-07-15 08:03:50.530556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.530680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.530711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.530845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.530878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.531039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.531193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.531421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.531591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.531760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.531972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.532131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.532312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.532487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.532666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.532881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.532914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.533068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.533221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.533504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.533645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.533799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.533993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.534161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.534330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.534506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.534664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.534883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.534915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.535926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.535957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.536101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.536133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.536261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.536295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.536512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.536543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.536736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.536768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.536908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.536939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.537062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.537097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.537221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.537262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.537384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.537416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.537534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.537566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.876 qpair failed and we were unable to recover it.
00:28:05.876 [2024-07-15 08:03:50.537746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.876 [2024-07-15 08:03:50.537779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.537920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.537952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.538112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.538272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.538417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.538626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.538857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.538979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.539195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.539362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.539581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.539751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.539914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.539946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.540138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.540170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.540379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.540417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.540545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.540577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.540694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.540726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.540867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.540899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.541921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.541953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.542204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.542246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.542368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.542401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.542534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.542565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.542671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.542703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbaed0 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.542901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.542936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.543064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.543096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.543280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.543314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.543447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.543480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.543613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.543643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.543843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.543875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.544946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.877 [2024-07-15 08:03:50.544978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.877 qpair failed and we were unable to recover it.
00:28:05.877 [2024-07-15 08:03:50.545145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.545183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.545315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.545348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.545487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.545519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.545652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.545683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.545877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.545909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.546058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.546270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.546480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.546703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.546851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.546976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.547124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.547342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.547510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.547750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.547907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.547939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.548142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.548373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.548522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.548674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.548848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.548980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.549195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.549347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.549576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.549729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.549873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.549906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.550933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.550963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.551077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.551107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.551308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.551342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.878 qpair failed and we were unable to recover it.
00:28:05.878 [2024-07-15 08:03:50.551454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.878 [2024-07-15 08:03:50.551484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.551596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.551626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.551770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.551801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.551926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.551956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.552082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.552113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.552244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.552280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.552508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.552539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.552653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.552683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.552799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.552830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.553953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.553984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.554924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.554956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.555137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.555168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.555319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.555352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.555534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.555564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.555675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.555705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.555904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.555935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.556211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.556253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.556455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.556485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.556685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.556716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.556913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.879 [2024-07-15 08:03:50.556943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420
00:28:05.879 qpair failed and we were unable to recover it.
00:28:05.879 [2024-07-15 08:03:50.557077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.557107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.557322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.557353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e28000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.557499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.557537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.557794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.557826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.558025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.558057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.558319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.558353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.558502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.558534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.558646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.558678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.558885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.558916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 00:28:05.879 [2024-07-15 08:03:50.559024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.559055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.879 qpair failed and we were unable to recover it. 
00:28:05.879 [2024-07-15 08:03:50.559184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.879 [2024-07-15 08:03:50.559215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.559395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.559428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.559566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.559598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.559783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.559814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.559942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.559974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.560157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.560195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.560351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.560386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.560507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.560538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.560742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.560773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.560968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.560998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 
00:28:05.880 [2024-07-15 08:03:50.561183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.561214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.561360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.561392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.561587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.561618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.561744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.561776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.561922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.561953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.562160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.562191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.562341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.562373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.562524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.562554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.562673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.562704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.563028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.563061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 
00:28:05.880 [2024-07-15 08:03:50.563258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.563290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.563481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.563512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.563644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.563675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.563932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.563963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.564240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.564272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.564424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.564455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.564590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.564621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.564809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.564840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.565054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.565085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.565285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.565318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 
00:28:05.880 [2024-07-15 08:03:50.565556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.565588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.565774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.565804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.566961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.566992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.567188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.567219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 00:28:05.880 [2024-07-15 08:03:50.567442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.880 [2024-07-15 08:03:50.567473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.880 qpair failed and we were unable to recover it. 
00:28:05.880 [2024-07-15 08:03:50.567660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.567691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.567812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.567843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.568056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.568219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.568390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.568618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.568800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.568973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.569189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.569341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 
00:28:05.881 [2024-07-15 08:03:50.569589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.569754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.569903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.569934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.570072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.570103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.570255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.570288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.570484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.570515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.570650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.570681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.570822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.570853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.571032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.571063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.571244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.571276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 
00:28:05.881 [2024-07-15 08:03:50.571458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.571490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.571677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.571709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.571937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.571969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.572134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.572165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.572427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.572459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.572583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.572614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.572744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.572775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.573083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.573114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.573311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.573344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.573478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.573509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 
00:28:05.881 [2024-07-15 08:03:50.573654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.573685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.573883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.573914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e38000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.574056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.574094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.574356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.574390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.574599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.574631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:05.881 [2024-07-15 08:03:50.574754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.881 [2024-07-15 08:03:50.574785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:05.881 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.574985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.575144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.575360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.575527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 
00:28:06.149 [2024-07-15 08:03:50.575692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.575943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.575974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.576155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.576186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.576366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.576398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.576529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.576561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.576761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.576798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.577071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.577102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.577325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.577357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.577605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.577637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.577898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.577930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 
00:28:06.149 [2024-07-15 08:03:50.578148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.578179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.578388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.578421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.578609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.578641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.578843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.578875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.579069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.579101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.579288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.579320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.579475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.579507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.579645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.579676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.579788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.579820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 00:28:06.149 [2024-07-15 08:03:50.580008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.149 [2024-07-15 08:03:50.580040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.149 qpair failed and we were unable to recover it. 
[... identical failure triplets for tqpair=0x7f2e30000b90 continue from 08:03:50.580223 through 08:03:50.581448, now interleaved with the test script's xtrace output as it finishes waiting for the target ...]
00:28:06.149 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
[... one failure triplet at 08:03:50.581663 ...]
00:28:06.149 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
[... failure triplets at 08:03:50.581938 and 08:03:50.582183 ...]
00:28:06.149 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
[... one failure triplet at 08:03:50.582426 ...]
00:28:06.150 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
[... one failure triplet at 08:03:50.582707 ...]
00:28:06.150 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... failure triplets continue from 08:03:50.583021 through 08:03:50.584362 ...]
[... the identical failure triplet for tqpair=0x7f2e30000b90 keeps repeating from 08:03:50.584538 through 08:03:50.598533, with the console elapsed stamp advancing from 00:28:06.150 to 00:28:06.151 ...]
00:28:06.151 [2024-07-15 08:03:50.598750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.151 [2024-07-15 08:03:50.598780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.151 qpair failed and we were unable to recover it.
00:28:06.151 [2024-07-15 08:03:50.598974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.599004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.599223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.599271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.599413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.599442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.599586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.599616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.599744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.599774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.599996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.600026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.600284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.600315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.600458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.600488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.600744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.600774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.600996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.601027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 
00:28:06.151 [2024-07-15 08:03:50.601159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.601190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.151 [2024-07-15 08:03:50.601436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.151 [2024-07-15 08:03:50.601467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.151 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.601669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.601704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.601988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.602018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.602311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.602343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.602475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.602504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.602625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.602654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.602797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.602828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.603044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.603074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.603327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.603359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-07-15 08:03:50.603495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.603527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.603682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.603713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.603923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.603953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.604147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.604177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.604359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.604390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.604655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.604685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.604966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.604996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.605194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.605235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.605384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.605415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.605559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.605589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-07-15 08:03:50.605718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.605748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.605999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.606028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.606164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.606194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.606418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.606448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.606571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.606600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.606813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.606843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.607092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.607122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.607268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.607299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.607450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.607480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.607618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.607649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-07-15 08:03:50.607952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.607985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.608258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.608290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.608447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.608479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.608676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.608706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.608946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.608975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.609190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.609221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.609430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.609461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.609587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.609619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.609804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.609834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.152 [2024-07-15 08:03:50.610023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.610053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 
00:28:06.152 [2024-07-15 08:03:50.610248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.152 [2024-07-15 08:03:50.610280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.152 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.610507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.610537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.610734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.610774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.611006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.611036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.611219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.611260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.611414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.611444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.611689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.611719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.611857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.611887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.612076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.612106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.612364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.612395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-07-15 08:03:50.612553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.612585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.612734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.612764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.612981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.613011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.613238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.613269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.613466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.613496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.613689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.613720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.613991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.614204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.614362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.614547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-07-15 08:03:50.614772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.614963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.614994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.615255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.615286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.615418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.615448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.615638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.615668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.615810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.615841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.615993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.616024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.616245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.616275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.616547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.616578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 00:28:06.153 [2024-07-15 08:03:50.616760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.153 [2024-07-15 08:03:50.616791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.153 qpair failed and we were unable to recover it. 
00:28:06.153 [2024-07-15 08:03:50.616936 .. 08:03:50.618375] the same error triplet continues to repeat while the test script advances; the trace lines below arrive interleaved with the errors
00:28:06.154 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:06.154 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:06.154 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.154 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:06.154 [2024-07-15 08:03:50.618635 .. 08:03:50.620777] the error triplet keeps repeating between and after the trace lines above
00:28:06.154 [2024-07-15 08:03:50.621073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.621103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.621241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.621272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.621450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.621480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.621724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.621753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.622058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.622087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.622360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.622391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.622545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.622575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.622770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.622800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.623017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.623047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.623276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.623306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-07-15 08:03:50.623453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.623484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.623664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.623693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.623910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.623940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.624190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.624221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.624382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.624412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.624614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.624644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.624776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.624807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.625037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.625067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.625309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.625339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.625488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.625523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 
00:28:06.154 [2024-07-15 08:03:50.625673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.625703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.625938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.625968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.626186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.626217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.626427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.626457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.154 qpair failed and we were unable to recover it. 00:28:06.154 [2024-07-15 08:03:50.626675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.154 [2024-07-15 08:03:50.626704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.627058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.627089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.627219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.627261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.627457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.627487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.627700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.627730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.627933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.627963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.155 [2024-07-15 08:03:50.628153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.628182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.628337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.628368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.628560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.628591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.628792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.628822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.629070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.629100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.629317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.629348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.629545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.629575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.629775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.629806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.630050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.630080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 00:28:06.155 [2024-07-15 08:03:50.630327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.155 [2024-07-15 08:03:50.630358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 00:28:06.155 qpair failed and we were unable to recover it. 
00:28:06.156 Malloc0
00:28:06.156 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.156 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:06.156 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.156 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
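The xtrace records above show the test creating the target's TCP transport through the harness's rpc_cmd wrapper; the "TCP Transport Init" notice below confirms it took effect. A minimal sketch of the equivalent standalone call, assuming an SPDK target is already running and listening on the default /var/tmp/spdk.sock RPC socket (the path and socket are assumptions; the command and flags are mirrored from the trace):

  # create the NVMe-oF TCP transport, as the test does via rpc_cmd
  ./scripts/rpc.py nvmf_create_transport -t tcp -o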
00:28:06.157 [2024-07-15 08:03:50.646479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:06.158 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.158 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:06.158 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.158 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
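This step creates the NVM subsystem nqn.2016-06.io.spdk:cnode1, with -a allowing any host to connect and -s setting its serial number. The standalone equivalent, under the same running-target assumption as above:

  # create the subsystem the initiator will connect to
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001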
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
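The add_ns call attaches the Malloc0 bdev (its name appears in the log output above) to the subsystem as a namespace. Standalone sketch, same assumption:

  # expose bdev Malloc0 as a namespace of cnode1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0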
00:28:06.159 [2024-07-15 08:03:50.671169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.159 [2024-07-15 08:03:50.671199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.159 qpair failed and we were unable to recover it.
00:28:06.159 [2024-07-15 08:03:50.671462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.159 [2024-07-15 08:03:50.671492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.159 qpair failed and we were unable to recover it.
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:06.159 [2024-07-15 08:03:50.671689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.159 [2024-07-15 08:03:50.671720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.159 qpair failed and we were unable to recover it.
00:28:06.159 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.160 [2024-07-15 08:03:50.672016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.672047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:06.160 [2024-07-15 08:03:50.672244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.672275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.672473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.672504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.672718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.672749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.672892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.672923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.673268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.673300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.673516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.673546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.673756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.673787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.674025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.674056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.674330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.674362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.674662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.160 [2024-07-15 08:03:50.674693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e30000b90 with addr=10.0.0.2, port=4420 [2024-07-15 08:03:50.674696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 [2024-07-15 08:03:50.677110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.160 [2024-07-15 08:03:50.677302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.160 [2024-07-15 08:03:50.677357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.160 [2024-07-15 08:03:50.677381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.160 [2024-07-15 08:03:50.677402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.160 [2024-07-15 08:03:50.677453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.160 qpair failed and we were unable to recover it.
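The errno = 111 storm above is ECONNREFUSED: the host-side initiator keeps retrying 10.0.0.2:4420 until the target's listener exists, which is the *** NVMe/TCP Target Listening *** notice interleaved into the last retry. The failures that follow are a different class: the CONNECT capsule now reaches the target but is rejected (ctrlr.c: Unknown controller ID 0x1), which the host reports as sct 1, sc 130. A minimal sketch of the two host-visible phases, reusing the listener RPC shown in the trace (the rpc.py invocation and the probe loop are illustrative, not commands run by this job):

  # register the TCP listener, as the rpc_cmd line in the trace does
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # until the listener is up, every connect() returns errno 111 (ECONNREFUSED);
  # bash's /dev/tcp gives a quick probe for the moment the port starts accepting
  until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do sleep 0.1; done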
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:06.160 [2024-07-15 08:03:50.687029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.160 [2024-07-15 08:03:50.687143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.160 [2024-07-15 08:03:50.687186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.160 [2024-07-15 08:03:50.687208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.160 [2024-07-15 08:03:50.687247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:06.160 [2024-07-15 08:03:50.687294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.160 qpair failed and we were unable to recover it.
00:28:06.160 08:03:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3414262
00:28:06.160 [2024-07-15 08:03:50.697010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.160 [2024-07-15 08:03:50.697102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.160 [2024-07-15 08:03:50.697137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.160 [2024-07-15 08:03:50.697151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.160 [2024-07-15 08:03:50.697164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.160 [2024-07-15 08:03:50.697197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.160 qpair failed and we were unable to recover it.
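From here on the capture repeats one block per connect attempt: the target's _nvmf_ctrlr_add_io_qpair rejects the I/O queue pair because the CONNECT names controller ID 0x1, presumably an association that did not survive the injected disconnect, and the host surfaces that completion as sct 1, sc 130 before giving up with CQ transport error -6 (ENXIO). The status pair can be decoded by hand; a quick sanity check, with the values taken from the lines above:

  # sct 1  = status code type 1, command specific
  # sc 130 = 0x82, the NVMe-oF Fabrics CONNECT Invalid Parameters status
  printf 'sct=%d sc=0x%02X\n' 1 130   # prints: sct=1 sc=0x82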
00:28:06.160 [2024-07-15 08:03:50.706968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.707067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.707088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.707098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.707107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.707129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-07-15 08:03:50.716988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.717051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.717065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.717072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.717078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.717092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-07-15 08:03:50.727029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.727085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.727100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.727107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.727113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.727128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.160 [2024-07-15 08:03:50.737053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.737109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.737124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.737131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.737137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.737151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-07-15 08:03:50.747075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.747135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.747151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.747157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.747163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.747178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 00:28:06.160 [2024-07-15 08:03:50.757093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.160 [2024-07-15 08:03:50.757152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.160 [2024-07-15 08:03:50.757167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.160 [2024-07-15 08:03:50.757176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.160 [2024-07-15 08:03:50.757183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.160 [2024-07-15 08:03:50.757197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.160 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-07-15 08:03:50.767179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.767265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.767281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.767288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.767294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.767310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.777146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.777200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.777216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.777223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.777232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.777246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.787176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.787243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.787259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.787265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.787271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.787286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-07-15 08:03:50.797217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.797274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.797290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.797296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.797302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.797317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.807249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.807303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.807318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.807324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.807330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.807345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.817278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.817333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.817347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.817354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.817360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.817374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-07-15 08:03:50.827223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.827283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.827297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.827304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.827310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.827324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.837336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.837387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.837401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.837408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.837414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.837427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.847389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.847444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.847462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.847468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.847474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.847488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-07-15 08:03:50.857409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.857460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.857474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.857481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.857487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.857500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.867405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.867467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.867481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.867488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.867494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.867508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.161 [2024-07-15 08:03:50.877452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.877512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.877527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.877533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.877539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.877553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 
00:28:06.161 [2024-07-15 08:03:50.887486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.161 [2024-07-15 08:03:50.887543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.161 [2024-07-15 08:03:50.887557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.161 [2024-07-15 08:03:50.887564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.161 [2024-07-15 08:03:50.887569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.161 [2024-07-15 08:03:50.887586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.161 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.897562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.897615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.897630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.897636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.897643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.897657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.907617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.907682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.907697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.907703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.907709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.907723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 
00:28:06.422 [2024-07-15 08:03:50.917615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.917673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.917687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.917694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.917700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.917713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.927648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.927700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.927714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.927720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.927726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.927739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.937714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.937772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.937789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.937796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.937802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.937816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 
00:28:06.422 [2024-07-15 08:03:50.947682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.947752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.947767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.947774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.947780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.947794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.957722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.422 [2024-07-15 08:03:50.957778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.422 [2024-07-15 08:03:50.957792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.422 [2024-07-15 08:03:50.957798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.422 [2024-07-15 08:03:50.957804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.422 [2024-07-15 08:03:50.957818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.422 qpair failed and we were unable to recover it. 00:28:06.422 [2024-07-15 08:03:50.967769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:50.967826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:50.967840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:50.967847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:50.967852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:50.967867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 
00:28:06.423 [2024-07-15 08:03:50.977751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:50.977819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:50.977833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:50.977840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:50.977851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:50.977866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:50.987792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:50.987853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:50.987867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:50.987874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:50.987880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:50.987894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:50.997773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:50.997830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:50.997845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:50.997852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:50.997857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:50.997872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 
00:28:06.423 [2024-07-15 08:03:51.007803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.007852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.007866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.007873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.007879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.007892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.017760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.017813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.017827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.017833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.017839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.017853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.027860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.027920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.027934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.027940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.027946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.027960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 
00:28:06.423 [2024-07-15 08:03:51.037887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.037948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.037962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.037968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.037974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.037988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.047925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.047979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.047995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.048001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.048007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.048021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.057954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.058005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.058019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.058025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.058031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.058045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 
00:28:06.423 [2024-07-15 08:03:51.067960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.068018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.068032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.068042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.068048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.068062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.078011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.078069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.078083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.078090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.078095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.078109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.088079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.088144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.088158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.088164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.088170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.088184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 
00:28:06.423 [2024-07-15 08:03:51.098037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.098093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.098107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.423 [2024-07-15 08:03:51.098114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.423 [2024-07-15 08:03:51.098119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.423 [2024-07-15 08:03:51.098134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.423 qpair failed and we were unable to recover it. 00:28:06.423 [2024-07-15 08:03:51.108093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.423 [2024-07-15 08:03:51.108154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.423 [2024-07-15 08:03:51.108169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.108176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.108181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.108195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 00:28:06.424 [2024-07-15 08:03:51.118088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.118145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.118160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.118166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.118172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.118186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 
00:28:06.424 [2024-07-15 08:03:51.128132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.128188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.128203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.128209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.128215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.128232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 00:28:06.424 [2024-07-15 08:03:51.138166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.138221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.138238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.138245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.138250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.138264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 00:28:06.424 [2024-07-15 08:03:51.148208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.148286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.148301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.148308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.148314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.148327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 
00:28:06.424 [2024-07-15 08:03:51.158229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.158312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.158326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.158335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.158341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.158355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 00:28:06.424 [2024-07-15 08:03:51.168184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.424 [2024-07-15 08:03:51.168266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.424 [2024-07-15 08:03:51.168281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.424 [2024-07-15 08:03:51.168287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.424 [2024-07-15 08:03:51.168293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.424 [2024-07-15 08:03:51.168307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.424 qpair failed and we were unable to recover it. 00:28:06.683 [2024-07-15 08:03:51.178333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.683 [2024-07-15 08:03:51.178392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.683 [2024-07-15 08:03:51.178407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.683 [2024-07-15 08:03:51.178415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.683 [2024-07-15 08:03:51.178422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:06.683 [2024-07-15 08:03:51.178438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.683 qpair failed and we were unable to recover it. 
00:28:06.683 [2024-07-15 08:03:51.188330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.683 [2024-07-15 08:03:51.188390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.683 [2024-07-15 08:03:51.188404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.683 [2024-07-15 08:03:51.188411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.683 [2024-07-15 08:03:51.188417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.683 [2024-07-15 08:03:51.188431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.683 qpair failed and we were unable to recover it.
00:28:06.683 [2024-07-15 08:03:51.198347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.683 [2024-07-15 08:03:51.198407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.683 [2024-07-15 08:03:51.198422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.683 [2024-07-15 08:03:51.198429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.683 [2024-07-15 08:03:51.198435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.683 [2024-07-15 08:03:51.198449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.683 qpair failed and we were unable to recover it.
00:28:06.683 [2024-07-15 08:03:51.208379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.683 [2024-07-15 08:03:51.208436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.683 [2024-07-15 08:03:51.208450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.683 [2024-07-15 08:03:51.208457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.683 [2024-07-15 08:03:51.208463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.683 [2024-07-15 08:03:51.208477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.683 qpair failed and we were unable to recover it.
00:28:06.683 [2024-07-15 08:03:51.218429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.218481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.218495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.218502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.218508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.218522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.228441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.228497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.228511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.228518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.228524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.228537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.238458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.238516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.238529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.238536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.238542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.238556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.248537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.248589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.248609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.248616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.248622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.248636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.258529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.258620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.258634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.258640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.258646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.258661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.268562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.268620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.268635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.268642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.268648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.268663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.278581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.278633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.278647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.278653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.278659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.278673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.288645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.288734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.288747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.288753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.288759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.288776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.298624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.298688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.298703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.298709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.298715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.298729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.308673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.308730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.308744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.308751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.308757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.308771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.318706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.318760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.318774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.318781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.318787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.318801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.328729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.328777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.328791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.328798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.328803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.328817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.338685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.338745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.684 [2024-07-15 08:03:51.338762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.684 [2024-07-15 08:03:51.338769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.684 [2024-07-15 08:03:51.338774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.684 [2024-07-15 08:03:51.338788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.684 qpair failed and we were unable to recover it.
00:28:06.684 [2024-07-15 08:03:51.348798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.684 [2024-07-15 08:03:51.348856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.348871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.348877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.348883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.348897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.358822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.358872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.358886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.358892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.358898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.358911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.368845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.368897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.368912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.368918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.368924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.368937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.378926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.378983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.378997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.379003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.379012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.379026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.388902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.388956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.388971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.388978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.388984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.388998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.398909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.398968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.398982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.398989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.398995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.399009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.408968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.409035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.409049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.409055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.409061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.409075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.419012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.419089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.419103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.419110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.419115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.419130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.685 [2024-07-15 08:03:51.429028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.685 [2024-07-15 08:03:51.429089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.685 [2024-07-15 08:03:51.429103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.685 [2024-07-15 08:03:51.429109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.685 [2024-07-15 08:03:51.429115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.685 [2024-07-15 08:03:51.429129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.685 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.439045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.439144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.439158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.439165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.439171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.439185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.449076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.449129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.449144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.449151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.449157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.449171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.459094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.459145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.459159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.459166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.459171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.459185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.469136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.469190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.469205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.469212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.469221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.469239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.479162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.479222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.479240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.479247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.479253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.479267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.489198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.489257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.489272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.489278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.489284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.489299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.499226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.499282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.499297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.499304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.499309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.499324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.509251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.509309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.509324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.509330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.509336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.509350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.519277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.944 [2024-07-15 08:03:51.519327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.944 [2024-07-15 08:03:51.519341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.944 [2024-07-15 08:03:51.519347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.944 [2024-07-15 08:03:51.519353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.944 [2024-07-15 08:03:51.519367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.944 qpair failed and we were unable to recover it.
00:28:06.944 [2024-07-15 08:03:51.529321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.529403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.529417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.529423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.529429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.529442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.539340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.539390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.539404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.539410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.539415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.539429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.549363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.549420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.549435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.549441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.549447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.549461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.559395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.559462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.559476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.559486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.559491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.559505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.569411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.569465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.569480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.569487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.569492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.569506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.579446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.579543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.579557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.579563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.579569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.579583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.589433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.589490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.589504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.589511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.589518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.589532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.599496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.599554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.599569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.599576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.599582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.599597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.609541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.609594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.609609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.609616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.609622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.609636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.619583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.619635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.619648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.619654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.619660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.619674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.629623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.629678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.629691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.629698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.629704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.629718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.639624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.639679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.639693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.639700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.639706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.639720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.649652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.649725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.649743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.649750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.649756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.649770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.659660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.659716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.945 [2024-07-15 08:03:51.659731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.945 [2024-07-15 08:03:51.659737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.945 [2024-07-15 08:03:51.659743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.945 [2024-07-15 08:03:51.659757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.945 qpair failed and we were unable to recover it.
00:28:06.945 [2024-07-15 08:03:51.669707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.945 [2024-07-15 08:03:51.669765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.946 [2024-07-15 08:03:51.669781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.946 [2024-07-15 08:03:51.669788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.946 [2024-07-15 08:03:51.669794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.946 [2024-07-15 08:03:51.669808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.946 qpair failed and we were unable to recover it.
00:28:06.946 [2024-07-15 08:03:51.679701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.946 [2024-07-15 08:03:51.679758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.946 [2024-07-15 08:03:51.679772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.946 [2024-07-15 08:03:51.679779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.946 [2024-07-15 08:03:51.679784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.946 [2024-07-15 08:03:51.679798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.946 qpair failed and we were unable to recover it.
00:28:06.946 [2024-07-15 08:03:51.689742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.946 [2024-07-15 08:03:51.689800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.946 [2024-07-15 08:03:51.689814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.946 [2024-07-15 08:03:51.689821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.946 [2024-07-15 08:03:51.689827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:06.946 [2024-07-15 08:03:51.689843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.946 qpair failed and we were unable to recover it.
00:28:07.205 [2024-07-15 08:03:51.699772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.205 [2024-07-15 08:03:51.699828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.205 [2024-07-15 08:03:51.699842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.205 [2024-07-15 08:03:51.699849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.205 [2024-07-15 08:03:51.699854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.205 [2024-07-15 08:03:51.699868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.205 qpair failed and we were unable to recover it.
00:28:07.205 [2024-07-15 08:03:51.709828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.205 [2024-07-15 08:03:51.709886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.205 [2024-07-15 08:03:51.709901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.205 [2024-07-15 08:03:51.709908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.205 [2024-07-15 08:03:51.709914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.205 [2024-07-15 08:03:51.709929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.205 qpair failed and we were unable to recover it.
00:28:07.205 [2024-07-15 08:03:51.719849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.205 [2024-07-15 08:03:51.719905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.205 [2024-07-15 08:03:51.719919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.205 [2024-07-15 08:03:51.719926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.205 [2024-07-15 08:03:51.719932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.205 [2024-07-15 08:03:51.719945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.205 qpair failed and we were unable to recover it.
00:28:07.205 [2024-07-15 08:03:51.729871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.206 [2024-07-15 08:03:51.729927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.206 [2024-07-15 08:03:51.729941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.206 [2024-07-15 08:03:51.729947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.206 [2024-07-15 08:03:51.729953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.206 [2024-07-15 08:03:51.729967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.206 qpair failed and we were unable to recover it.
00:28:07.206 [2024-07-15 08:03:51.739887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.206 [2024-07-15 08:03:51.739940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.206 [2024-07-15 08:03:51.739960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.206 [2024-07-15 08:03:51.739967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.206 [2024-07-15 08:03:51.739972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.206 [2024-07-15 08:03:51.739986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.206 qpair failed and we were unable to recover it.
00:28:07.206 [2024-07-15 08:03:51.749890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.206 [2024-07-15 08:03:51.749982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.206 [2024-07-15 08:03:51.749997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.206 [2024-07-15 08:03:51.750004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.206 [2024-07-15 08:03:51.750009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:07.206 [2024-07-15 08:03:51.750024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:07.206 qpair failed and we were unable to recover it.
00:28:07.206 [2024-07-15 08:03:51.759959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.760019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.760033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.760039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.760045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.760059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.769975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.770032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.770048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.770055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.770061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.770076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.779992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.780092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.780107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.780114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.780123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.780138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 
00:28:07.206 [2024-07-15 08:03:51.790016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.790104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.790118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.790124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.790130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.790144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.800065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.800125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.800140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.800147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.800152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.800167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.810121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.810180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.810194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.810201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.810207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.810221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 
00:28:07.206 [2024-07-15 08:03:51.820078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.820137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.820151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.820158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.820164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.820178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.830165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.830231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.830246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.830252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.830258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.830273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.840179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.840238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.840252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.840259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.840264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.840279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 
00:28:07.206 [2024-07-15 08:03:51.850203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.850266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.850281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.850287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.850293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.850307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.860236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.860294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.860308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.860314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.860320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.860335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 00:28:07.206 [2024-07-15 08:03:51.870212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.870274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.206 [2024-07-15 08:03:51.870289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.206 [2024-07-15 08:03:51.870295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.206 [2024-07-15 08:03:51.870304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.206 [2024-07-15 08:03:51.870319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.206 qpair failed and we were unable to recover it. 
00:28:07.206 [2024-07-15 08:03:51.880282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.206 [2024-07-15 08:03:51.880340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.880354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.880361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.880367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.880382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.207 [2024-07-15 08:03:51.890321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.890376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.890390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.890397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.890402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.890417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.207 [2024-07-15 08:03:51.900329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.900431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.900445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.900452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.900458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.900473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 
00:28:07.207 [2024-07-15 08:03:51.910386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.910441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.910456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.910463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.910468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.910483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.207 [2024-07-15 08:03:51.920336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.920394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.920407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.920414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.920420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.920435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.207 [2024-07-15 08:03:51.930481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.930539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.930553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.930559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.930565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.930579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 
00:28:07.207 [2024-07-15 08:03:51.940455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.940513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.940526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.940533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.940538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.940552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.207 [2024-07-15 08:03:51.950521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.207 [2024-07-15 08:03:51.950586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.207 [2024-07-15 08:03:51.950600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.207 [2024-07-15 08:03:51.950607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.207 [2024-07-15 08:03:51.950613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.207 [2024-07-15 08:03:51.950626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.207 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:51.960448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:51.960508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:51.960522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:51.960532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:51.960538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:51.960552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 
00:28:07.467 [2024-07-15 08:03:51.970537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:51.970597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:51.970612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:51.970618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:51.970624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:51.970638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:51.980560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:51.980613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:51.980627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:51.980634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:51.980640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:51.980653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:51.990594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:51.990648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:51.990662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:51.990669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:51.990674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:51.990688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 
00:28:07.467 [2024-07-15 08:03:52.000576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.000631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.000646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.000652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.000658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.000672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:52.010635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.010719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.010733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.010740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.010746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.010759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:52.020718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.020771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.020785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.020791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.020797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.020811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 
00:28:07.467 [2024-07-15 08:03:52.030711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.030769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.030783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.030790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.030796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.030810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:52.040731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.040787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.040801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.040807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.040813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.040827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:52.050763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.050814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.050831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.050838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.050844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.050858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 
00:28:07.467 [2024-07-15 08:03:52.060796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.060851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.060865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.060871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.060877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.467 [2024-07-15 08:03:52.060892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.467 qpair failed and we were unable to recover it. 00:28:07.467 [2024-07-15 08:03:52.070776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.467 [2024-07-15 08:03:52.070835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.467 [2024-07-15 08:03:52.070850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.467 [2024-07-15 08:03:52.070856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.467 [2024-07-15 08:03:52.070862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.070876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.080865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.080919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.080933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.080939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.080945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.080959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 
00:28:07.468 [2024-07-15 08:03:52.090893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.090952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.090966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.090973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.090978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.090996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.100861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.100940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.100954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.100960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.100966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.100981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.110884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.110943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.110957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.110963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.110969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.110983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 
00:28:07.468 [2024-07-15 08:03:52.120898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.120953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.120967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.120973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.120978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.120992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.131001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.131062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.131076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.131083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.131088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.131102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.141028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.141082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.141100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.141107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.141112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.141126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 
00:28:07.468 [2024-07-15 08:03:52.151107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.151189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.151204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.151210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.151216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.151236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.161095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.161154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.161167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.161174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.161180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.161194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.171128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.171186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.171200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.171207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.171213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.171231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 
00:28:07.468 [2024-07-15 08:03:52.181180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.181238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.181253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.181260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.181265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.181282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.191236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.191294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.191307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.191314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.191320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.191334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.468 [2024-07-15 08:03:52.201228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.201283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.201298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.201305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.201310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.201325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 
00:28:07.468 [2024-07-15 08:03:52.211275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.468 [2024-07-15 08:03:52.211331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.468 [2024-07-15 08:03:52.211345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.468 [2024-07-15 08:03:52.211352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.468 [2024-07-15 08:03:52.211358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.468 [2024-07-15 08:03:52.211372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.468 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.221308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.221363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.221377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.221384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.221389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.221404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.231310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.231368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.231383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.231389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.231395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.231409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-07-15 08:03:52.241323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.241378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.241391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.241398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.241404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.241418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.251298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.251350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.251364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.251371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.251377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.251391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.261413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.261470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.261484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.261490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.261496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.261510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-07-15 08:03:52.271425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.271492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.271507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.271514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.271523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.271537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.281448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.281551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.281565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.281572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.281578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.281592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.291466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.291519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.291532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.291539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.291545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.291559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-07-15 08:03:52.301506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.301556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.301571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.301577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.301583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.301597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.311520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.311573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.311587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.311593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.311599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.311613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.321557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.321653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.321667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.321673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.321679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.321693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 
00:28:07.732 [2024-07-15 08:03:52.331579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.331631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.331645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.331652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.331657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.331671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.732 [2024-07-15 08:03:52.341615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.732 [2024-07-15 08:03:52.341672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.732 [2024-07-15 08:03:52.341685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.732 [2024-07-15 08:03:52.341692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.732 [2024-07-15 08:03:52.341697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.732 [2024-07-15 08:03:52.341711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.732 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.351638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.351694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.351709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.351715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.351721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.351735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 
00:28:07.733 [2024-07-15 08:03:52.361678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.361734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.361748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.361758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.361764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.361778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.371709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.371779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.371793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.371799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.371805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.371819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.381718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.381783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.381797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.381804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.381810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.381823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 
00:28:07.733 [2024-07-15 08:03:52.391729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.391785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.391799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.391806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.391811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.391825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.401756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.401813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.401827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.401834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.401840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.401854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.411790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.411842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.411856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.411862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.411868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.411882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 
00:28:07.733 [2024-07-15 08:03:52.421862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.421910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.421924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.421930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.421936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.421950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.431787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.431842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.431856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.431863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.431868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.431882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.441870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.441921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.441935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.441941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.441947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.441960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 
00:28:07.733 [2024-07-15 08:03:52.451901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.451955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.451969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.451978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.451984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.451998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.461995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.462053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.462066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.462073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.462078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.462092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.733 [2024-07-15 08:03:52.471972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.472029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.472044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.472051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.472057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.472071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 
00:28:07.733 [2024-07-15 08:03:52.482008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.733 [2024-07-15 08:03:52.482061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.733 [2024-07-15 08:03:52.482076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.733 [2024-07-15 08:03:52.482083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.733 [2024-07-15 08:03:52.482088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.733 [2024-07-15 08:03:52.482103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.733 qpair failed and we were unable to recover it. 00:28:07.993 [2024-07-15 08:03:52.492076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.993 [2024-07-15 08:03:52.492138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.993 [2024-07-15 08:03:52.492153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.993 [2024-07-15 08:03:52.492160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.993 [2024-07-15 08:03:52.492166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.993 [2024-07-15 08:03:52.492181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.993 qpair failed and we were unable to recover it. 00:28:07.993 [2024-07-15 08:03:52.502062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.993 [2024-07-15 08:03:52.502120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.993 [2024-07-15 08:03:52.502135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.993 [2024-07-15 08:03:52.502141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.993 [2024-07-15 08:03:52.502147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.993 [2024-07-15 08:03:52.502161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.993 qpair failed and we were unable to recover it. 
00:28:07.993 [2024-07-15 08:03:52.512090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.993 [2024-07-15 08:03:52.512145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.993 [2024-07-15 08:03:52.512159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.993 [2024-07-15 08:03:52.512167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.993 [2024-07-15 08:03:52.512172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.993 [2024-07-15 08:03:52.512187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.993 qpair failed and we were unable to recover it. 00:28:07.993 [2024-07-15 08:03:52.522160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.993 [2024-07-15 08:03:52.522210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.522223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.522234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.522240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.522254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.532152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.532208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.532223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.532233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.532239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.532253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 
00:28:07.994 [2024-07-15 08:03:52.542173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.542228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.542247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.542254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.542259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.542274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.552217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.552287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.552302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.552309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.552314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.552328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.562220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.562280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.562294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.562301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.562306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.562320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 
00:28:07.994 [2024-07-15 08:03:52.572294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.572353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.572368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.572374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.572380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.572394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.582267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.582347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.582361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.582367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.582373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.582391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.592326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.592384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.592399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.592406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.592411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.592425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 
00:28:07.994 [2024-07-15 08:03:52.602338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.602398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.602412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.602418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.602424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.602438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.612392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.612445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.612459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.612466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.612471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.612485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.622454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.622511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.622525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.622532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.622538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.622552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 
00:28:07.994 [2024-07-15 08:03:52.632441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.632513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.632530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.632537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.632542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.632556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.642462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.642518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.642532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.642539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.642545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.642559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 00:28:07.994 [2024-07-15 08:03:52.652465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.652521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.994 [2024-07-15 08:03:52.652536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.994 [2024-07-15 08:03:52.652542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.994 [2024-07-15 08:03:52.652548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.994 [2024-07-15 08:03:52.652562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.994 qpair failed and we were unable to recover it. 
00:28:07.994 [2024-07-15 08:03:52.662518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.994 [2024-07-15 08:03:52.662577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.662592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.662599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.662605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.662619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.672549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.672607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.672623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.672630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.672639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.672654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.682591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.682643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.682658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.682665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.682671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.682686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 
00:28:07.995 [2024-07-15 08:03:52.692549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.692635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.692650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.692657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.692663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.692677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.702630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.702711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.702725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.702731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.702736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.702750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.712669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.712728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.712743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.712749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.712755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.712770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 
00:28:07.995 [2024-07-15 08:03:52.722695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.722756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.722770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.722776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.722782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.722795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.732742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.732831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.732844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.732851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.732857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.732871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 00:28:07.995 [2024-07-15 08:03:52.742797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.995 [2024-07-15 08:03:52.742852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.995 [2024-07-15 08:03:52.742866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.995 [2024-07-15 08:03:52.742873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.995 [2024-07-15 08:03:52.742879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:07.995 [2024-07-15 08:03:52.742893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.995 qpair failed and we were unable to recover it. 
00:28:08.255 [2024-07-15 08:03:52.752828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.752884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.752898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.752905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.752912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.752926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.762826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.762882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.762897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.762906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.762912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.762926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.772836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.772891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.772905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.772911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.772917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.772931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 
00:28:08.255 [2024-07-15 08:03:52.782854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.782910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.782924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.782930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.782935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.782950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.792931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.793011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.793026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.793033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.793038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.793052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.802934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.802991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.803005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.803012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.803017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.803031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 
00:28:08.255 [2024-07-15 08:03:52.812946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.813048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.813062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.813069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.813075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.813089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.822961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.823012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.823025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.823031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.823037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.823051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.833007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.833068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.833082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.833088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.833094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.833108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 
00:28:08.255 [2024-07-15 08:03:52.843033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.843086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.843099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.843106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.843111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.843125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.853059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.853114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.853128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.853137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.853142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.853156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 00:28:08.255 [2024-07-15 08:03:52.863115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.863170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.863184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.255 [2024-07-15 08:03:52.863191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.255 [2024-07-15 08:03:52.863196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.255 [2024-07-15 08:03:52.863211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.255 qpair failed and we were unable to recover it. 
00:28:08.255 [2024-07-15 08:03:52.873119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.255 [2024-07-15 08:03:52.873175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.255 [2024-07-15 08:03:52.873190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.256 [2024-07-15 08:03:52.873196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.256 [2024-07-15 08:03:52.873202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.256 [2024-07-15 08:03:52.873216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.256 qpair failed and we were unable to recover it. 00:28:08.256 [2024-07-15 08:03:52.883139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.256 [2024-07-15 08:03:52.883195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.256 [2024-07-15 08:03:52.883209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.256 [2024-07-15 08:03:52.883215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.256 [2024-07-15 08:03:52.883221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.256 [2024-07-15 08:03:52.883238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.256 qpair failed and we were unable to recover it. 00:28:08.256 [2024-07-15 08:03:52.893162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.256 [2024-07-15 08:03:52.893216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.256 [2024-07-15 08:03:52.893235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.256 [2024-07-15 08:03:52.893241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.256 [2024-07-15 08:03:52.893247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.256 [2024-07-15 08:03:52.893261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.256 qpair failed and we were unable to recover it. 
00:28:08.256 [2024-07-15 08:03:52.903189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.903248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.903263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.903269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.903275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.903289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.913310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.913373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.913387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.913393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.913399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.913413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.923272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.923328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.923342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.923348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.923354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.923368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.933315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.933401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.933415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.933421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.933426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.933440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.943336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.943396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.943414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.943420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.943426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.943440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.953336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.953393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.953408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.953414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.953420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.953434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.963327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.963386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.963399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.963406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.963412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.963426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.973394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.973463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.973477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.973483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.973489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.973503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.983412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.983467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.983481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.983487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.983493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.983511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:52.993442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:52.993501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:52.993515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:52.993522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:52.993527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:52.993541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.256 [2024-07-15 08:03:53.003459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.256 [2024-07-15 08:03:53.003518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.256 [2024-07-15 08:03:53.003533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.256 [2024-07-15 08:03:53.003540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.256 [2024-07-15 08:03:53.003546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.256 [2024-07-15 08:03:53.003561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.256 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.013458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.013509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.013524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.013531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.013536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.013550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.023545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.023608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.023622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.023628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.023634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.023648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.033540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.033597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.033615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.033622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.033628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.033642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.043567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.043620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.043634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.043641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.043647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.043661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.053589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.053674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.053688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.053694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.053700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.053715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.063618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.063670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.063684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.063690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.063696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.063710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.073650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.073709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.073723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.073730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.073738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.073753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.083676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.083730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.083744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.083750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.516 [2024-07-15 08:03:53.083755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.516 [2024-07-15 08:03:53.083770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.516 qpair failed and we were unable to recover it.
00:28:08.516 [2024-07-15 08:03:53.093754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.516 [2024-07-15 08:03:53.093813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.516 [2024-07-15 08:03:53.093828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.516 [2024-07-15 08:03:53.093834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.093840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.093854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.103741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.103795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.103808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.103815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.103821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.103835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.113774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.113827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.113841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.113848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.113854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.113867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.123841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.123898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.123913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.123919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.123925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.123938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.133768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.133828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.133842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.133849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.133855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.133868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.143868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.143930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.143945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.143951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.143957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.143972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.153890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.153948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.153962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.153969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.153977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.153992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.163957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.164021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.164036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.164042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.164051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.164066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.173962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.174019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.174034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.174040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.174046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.174060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.183970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.184026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.184041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.184047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.184053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.184067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.194072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.194173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.194188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.194195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.194200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.194215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.204044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.204097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.204111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.204117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.204123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.204137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.214048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.214106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.214120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.214127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.214132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.214146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.224092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.224146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.224160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.224167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.224172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.224186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.234110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.234165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.234179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.234185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.234191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.234205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.244139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.244196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.244210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.244217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.244223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.244241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.254148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.254202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.254216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.254232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.254238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.254252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.517 [2024-07-15 08:03:53.264198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.517 [2024-07-15 08:03:53.264254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.517 [2024-07-15 08:03:53.264269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.517 [2024-07-15 08:03:53.264276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.517 [2024-07-15 08:03:53.264282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.517 [2024-07-15 08:03:53.264296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.517 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.274238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.274297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.274312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.274319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.274325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.274339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.284240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.284299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.284314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.284321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.284326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.284341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.294296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.294349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.294364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.294371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.294377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.294390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.304359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.304415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.304430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.304436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.304442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.304457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.314352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.314410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.314424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.314430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.314436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.314449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.324377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.324439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.778 [2024-07-15 08:03:53.324453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.778 [2024-07-15 08:03:53.324459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.778 [2024-07-15 08:03:53.324465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.778 [2024-07-15 08:03:53.324479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.778 qpair failed and we were unable to recover it.
00:28:08.778 [2024-07-15 08:03:53.334401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.778 [2024-07-15 08:03:53.334460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.334474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.334481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.334487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.334500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.344413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.344479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.344497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.344504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.344510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.344524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.354463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.354522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.354536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.354543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.354548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.354562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.364487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.364540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.364555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.364562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.364568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.364584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.374510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.374568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.374582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.374589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.374594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.374608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.384529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.384583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.384597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.384603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.384609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.384626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.394612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.394669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.394683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.394690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.394695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.394710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.404595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.404664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.404678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.404684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.404690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.404704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.414643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.414700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.414715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.414721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.414727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.414741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.424695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.424778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.424792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.424798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.424804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.424817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.434736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.434789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.434807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.434814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.434819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.434833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.444697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.444756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.444772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.444779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.444785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.444800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.454686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.454737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.779 [2024-07-15 08:03:53.454751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.779 [2024-07-15 08:03:53.454758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.779 [2024-07-15 08:03:53.454764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.779 [2024-07-15 08:03:53.454778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.779 qpair failed and we were unable to recover it.
00:28:08.779 [2024-07-15 08:03:53.464758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:08.779 [2024-07-15 08:03:53.464818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:08.780 [2024-07-15 08:03:53.464832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:08.780 [2024-07-15 08:03:53.464839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:08.780 [2024-07-15 08:03:53.464845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90
00:28:08.780 [2024-07-15 08:03:53.464859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:08.780 qpair failed and we were unable to recover it.
00:28:08.780 [2024-07-15 08:03:53.474855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.474915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.474929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.474936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.474946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.474960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 00:28:08.780 [2024-07-15 08:03:53.484855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.484912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.484926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.484933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.484940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.484955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 00:28:08.780 [2024-07-15 08:03:53.494882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.494938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.494952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.494958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.494965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.494978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 
00:28:08.780 [2024-07-15 08:03:53.504838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.504892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.504906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.504913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.504919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.504934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 00:28:08.780 [2024-07-15 08:03:53.514930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.515014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.515028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.515035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.515042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.515056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 00:28:08.780 [2024-07-15 08:03:53.524970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.780 [2024-07-15 08:03:53.525034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.780 [2024-07-15 08:03:53.525049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.780 [2024-07-15 08:03:53.525056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.780 [2024-07-15 08:03:53.525062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:08.780 [2024-07-15 08:03:53.525076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:08.780 qpair failed and we were unable to recover it. 
00:28:09.040 [2024-07-15 08:03:53.534918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.534977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.534991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.534999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.535005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.535019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.545045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.545100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.545115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.545122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.545129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.545143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.555036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.555098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.555112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.555119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.555126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.555141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 
00:28:09.040 [2024-07-15 08:03:53.565125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.565210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.565229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.565236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.565245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.565260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.575069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.575139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.575153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.575160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.575166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.575180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.585146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.585202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.585216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.585228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.585234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.585249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 
00:28:09.040 [2024-07-15 08:03:53.595159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.595219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.595236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.595244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.595250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.595265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.605185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.605246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.605261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.605268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.605275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.605289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.615221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.615276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.615290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.615298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.615304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.615318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 
00:28:09.040 [2024-07-15 08:03:53.625249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.625300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.625314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.625320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.625327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.625341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.635273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.635342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.635357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.040 [2024-07-15 08:03:53.635364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.040 [2024-07-15 08:03:53.635370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.040 [2024-07-15 08:03:53.635384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.040 qpair failed and we were unable to recover it. 00:28:09.040 [2024-07-15 08:03:53.645308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.040 [2024-07-15 08:03:53.645364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.040 [2024-07-15 08:03:53.645379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.645386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.645392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.645406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 
00:28:09.041 [2024-07-15 08:03:53.655327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.655390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.655405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.655415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.655421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.655435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.665365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.665419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.665434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.665441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.665448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.665462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.675381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.675439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.675453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.675461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.675467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.675481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 
00:28:09.041 [2024-07-15 08:03:53.685423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.685478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.685492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.685499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.685506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.685520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.695446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.695503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.695518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.695525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.695531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.695545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.705498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.705565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.705580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.705587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.705593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.705607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 
00:28:09.041 [2024-07-15 08:03:53.715489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.715572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.715588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.715596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.715602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.715618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.725551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.725603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.725618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.725624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.725631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.725645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.735543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.735598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.735614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.735622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.735628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.735642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 
00:28:09.041 [2024-07-15 08:03:53.745583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.745640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.745658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.745665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.745671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.745687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.755598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.755652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.755667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.755674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.755681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.755695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.765626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.765684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.765699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.765707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.765714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.765728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 
00:28:09.041 [2024-07-15 08:03:53.775679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.775735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.775751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.775758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.041 [2024-07-15 08:03:53.775765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.041 [2024-07-15 08:03:53.775780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.041 qpair failed and we were unable to recover it. 00:28:09.041 [2024-07-15 08:03:53.785706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.041 [2024-07-15 08:03:53.785759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.041 [2024-07-15 08:03:53.785774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.041 [2024-07-15 08:03:53.785780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.042 [2024-07-15 08:03:53.785787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.042 [2024-07-15 08:03:53.785806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.042 qpair failed and we were unable to recover it. 00:28:09.301 [2024-07-15 08:03:53.795783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.301 [2024-07-15 08:03:53.795866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.301 [2024-07-15 08:03:53.795881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.301 [2024-07-15 08:03:53.795888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.301 [2024-07-15 08:03:53.795894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.301 [2024-07-15 08:03:53.795908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.301 qpair failed and we were unable to recover it. 
00:28:09.301 [2024-07-15 08:03:53.805744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.301 [2024-07-15 08:03:53.805795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.301 [2024-07-15 08:03:53.805809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.301 [2024-07-15 08:03:53.805816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.301 [2024-07-15 08:03:53.805823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.805837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.815721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.815775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.815790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.815796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.815802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.815817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.825843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.825895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.825909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.825916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.825923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.825938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 
00:28:09.302 [2024-07-15 08:03:53.835837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.835895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.835912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.835920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.835926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.835940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.845926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.846014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.846028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.846035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.846042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.846056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.855896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.855953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.855967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.855974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.855981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.855995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 
00:28:09.302 [2024-07-15 08:03:53.865931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.865999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.866014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.866021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.866027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.866041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.875970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.876049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.876063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.876070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.876076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.876093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.885976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.886033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.886048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.886055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.886061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.886075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 
00:28:09.302 [2024-07-15 08:03:53.896004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.896060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.896075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.896083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.896089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.896103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.906035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.906100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.906114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.906122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.906127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.906142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.916048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.916107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.916121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.916128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.916133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.916147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 
00:28:09.302 [2024-07-15 08:03:53.926044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.926136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.926151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.926158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.926164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.926178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.936042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.936140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.936154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.936161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.936167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.936182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 00:28:09.302 [2024-07-15 08:03:53.946147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.302 [2024-07-15 08:03:53.946199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.302 [2024-07-15 08:03:53.946213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.302 [2024-07-15 08:03:53.946220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.302 [2024-07-15 08:03:53.946230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.302 [2024-07-15 08:03:53.946246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.302 qpair failed and we were unable to recover it. 
00:28:09.302 [2024-07-15 08:03:53.956179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:53.956245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:53.956259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:53.956267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:53.956273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:53.956288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:53.966197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:53.966259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:53.966274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:53.966281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:53.966291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:53.966307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:53.976221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:53.976275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:53.976289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:53.976296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:53.976303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:53.976317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 
00:28:09.303 [2024-07-15 08:03:53.986269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:53.986336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:53.986350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:53.986357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:53.986363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:53.986378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:53.996283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:53.996341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:53.996356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:53.996363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:53.996370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:53.996385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:54.006373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:54.006426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:54.006441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:54.006448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:54.006454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:54.006468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 
00:28:09.303 [2024-07-15 08:03:54.016333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:54.016410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:54.016427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:54.016435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:54.016441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:54.016456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:54.026385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:54.026487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:54.026502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:54.026511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:54.026517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:54.026532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.303 [2024-07-15 08:03:54.036417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:54.036521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:54.036535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:54.036542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:54.036548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:54.036563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 
00:28:09.303 [2024-07-15 08:03:54.046434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.303 [2024-07-15 08:03:54.046485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.303 [2024-07-15 08:03:54.046500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.303 [2024-07-15 08:03:54.046507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.303 [2024-07-15 08:03:54.046513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.303 [2024-07-15 08:03:54.046528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.303 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.056459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.056589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.056605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.056616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.056623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.056639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.066523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.066590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.066605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.066612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.066619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.066633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 
00:28:09.563 [2024-07-15 08:03:54.076592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.076652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.076666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.076673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.076680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.076693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.086587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.086648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.086662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.086669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.086676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.086692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.096608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.096664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.096679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.096687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.096693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.096707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 
00:28:09.563 [2024-07-15 08:03:54.106615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.106666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.106680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.106687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.106693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.106708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.116640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.116702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.116716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.116724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.116730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.116745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.126660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.126729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.126743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.126750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.126755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.126769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 
00:28:09.563 [2024-07-15 08:03:54.136639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.136692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.136706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.136713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.136719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.136733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.146722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.146773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.146791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.146798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.146806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.146820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.156668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.156724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.156739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.156747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.156753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.156767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 
00:28:09.563 [2024-07-15 08:03:54.166745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.563 [2024-07-15 08:03:54.166804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.563 [2024-07-15 08:03:54.166819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.563 [2024-07-15 08:03:54.166827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.563 [2024-07-15 08:03:54.166833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.563 [2024-07-15 08:03:54.166848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.563 qpair failed and we were unable to recover it. 00:28:09.563 [2024-07-15 08:03:54.176790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.176850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.176864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.176871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.176877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.176891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.186752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.186815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.186830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.186837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.186843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.186857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 
00:28:09.564 [2024-07-15 08:03:54.196849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.196908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.196923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.196930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.196936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.196950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.206895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.206956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.206970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.206978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.206984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.206999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.216902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.216954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.216969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.216976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.216983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.216997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 
00:28:09.564 [2024-07-15 08:03:54.226920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.226976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.226990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.226998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.227004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.227018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.236970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.237027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.237044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.237051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.237057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.237071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.246980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.247039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.247053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.247060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.247066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.247080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 
00:28:09.564 [2024-07-15 08:03:54.257019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.257075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.257090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.257097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.257103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.257117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.267051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.267109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.267123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.267131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.267137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.267151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.277077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.277140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.277154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.277161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.277167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.277185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 
00:28:09.564 [2024-07-15 08:03:54.287091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.287147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.287161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.287168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.287174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.287188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.297124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.297179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.297195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.297202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.297209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.297228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 00:28:09.564 [2024-07-15 08:03:54.307178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.564 [2024-07-15 08:03:54.307240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.564 [2024-07-15 08:03:54.307255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.564 [2024-07-15 08:03:54.307262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.564 [2024-07-15 08:03:54.307269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.564 [2024-07-15 08:03:54.307283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.564 qpair failed and we were unable to recover it. 
00:28:09.824 [2024-07-15 08:03:54.317180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.824 [2024-07-15 08:03:54.317247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.824 [2024-07-15 08:03:54.317261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.824 [2024-07-15 08:03:54.317270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.824 [2024-07-15 08:03:54.317276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.824 [2024-07-15 08:03:54.317290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.824 qpair failed and we were unable to recover it. 00:28:09.824 [2024-07-15 08:03:54.327199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.824 [2024-07-15 08:03:54.327257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.824 [2024-07-15 08:03:54.327274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.824 [2024-07-15 08:03:54.327282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.824 [2024-07-15 08:03:54.327288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.824 [2024-07-15 08:03:54.327302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.824 qpair failed and we were unable to recover it. 00:28:09.824 [2024-07-15 08:03:54.337214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.824 [2024-07-15 08:03:54.337269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.824 [2024-07-15 08:03:54.337284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.824 [2024-07-15 08:03:54.337291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.337298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.337313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 
00:28:09.825 [2024-07-15 08:03:54.347275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.347326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.347341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.347348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.347355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.347370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.357294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.357347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.357361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.357368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.357375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.357389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.367320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.367381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.367396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.367403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.367412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.367427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 
00:28:09.825 [2024-07-15 08:03:54.377353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.377411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.377425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.377433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.377439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.377453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.387403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.387461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.387475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.387483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.387490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.387505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.397455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.397508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.397523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.397530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.397537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.397552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 
00:28:09.825 [2024-07-15 08:03:54.407434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.407493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.407507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.407514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.407520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.407535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.417497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.417551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.417566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.417573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.417579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.417594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.427500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.427556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.427571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.427578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.427584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.427598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 
00:28:09.825 [2024-07-15 08:03:54.437521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.437591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.437605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.437612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.437618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.437632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.825 qpair failed and we were unable to recover it. 00:28:09.825 [2024-07-15 08:03:54.447547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.825 [2024-07-15 08:03:54.447604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.825 [2024-07-15 08:03:54.447619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.825 [2024-07-15 08:03:54.447626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.825 [2024-07-15 08:03:54.447633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.825 [2024-07-15 08:03:54.447648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.457577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.457636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.457652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.457663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.457669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.457683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 
00:28:09.826 [2024-07-15 08:03:54.467622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.467675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.467690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.467697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.467703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.467719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.477647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.477704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.477719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.477725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.477732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.477746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.487591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.487646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.487661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.487668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.487674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.487689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 
00:28:09.826 [2024-07-15 08:03:54.497698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.497756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.497771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.497778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.497784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.497798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.507731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.507792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.507808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.507814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.507821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.507835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.517756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.517809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.517823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.517830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.517836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.517851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 
00:28:09.826 [2024-07-15 08:03:54.527784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.527846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.527861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.527868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.527874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.527889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.537814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.537865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.537879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.537886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.537892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.537906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.547848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.547900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.547915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.547925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.547931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.547946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 
00:28:09.826 [2024-07-15 08:03:54.557884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.826 [2024-07-15 08:03:54.557944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.826 [2024-07-15 08:03:54.557958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.826 [2024-07-15 08:03:54.557965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.826 [2024-07-15 08:03:54.557971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.826 [2024-07-15 08:03:54.557985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.826 qpair failed and we were unable to recover it. 00:28:09.826 [2024-07-15 08:03:54.567898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.827 [2024-07-15 08:03:54.567954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.827 [2024-07-15 08:03:54.567968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.827 [2024-07-15 08:03:54.567975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.827 [2024-07-15 08:03:54.567982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:09.827 [2024-07-15 08:03:54.567996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.827 qpair failed and we were unable to recover it. 00:28:10.086 [2024-07-15 08:03:54.577952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.086 [2024-07-15 08:03:54.578043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.086 [2024-07-15 08:03:54.578058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.086 [2024-07-15 08:03:54.578066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.086 [2024-07-15 08:03:54.578072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.578087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 
00:28:10.087 [2024-07-15 08:03:54.587972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.588032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.588047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.588054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.588060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.588074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.598008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.598069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.598085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.598092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.598098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.598113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.608065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.608121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.608136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.608143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.608149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.608164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 
00:28:10.087 [2024-07-15 08:03:54.618051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.618113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.618127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.618134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.618140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.618155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.628080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.628136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.628150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.628159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.628165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.628179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.638111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.638170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.638187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.638195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.638201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.638215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 
00:28:10.087 [2024-07-15 08:03:54.648159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.648217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.648236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.648243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.648250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.648264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.658200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.658261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.658277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.658284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.658290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.658305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.668194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.668251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.668266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.668274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.668281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.668295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 
00:28:10.087 [2024-07-15 08:03:54.678258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.678361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.678375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.678382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.678390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.678408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.688333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.688418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.688433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.688441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.688447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.688461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.698284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.698339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.698354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.698361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.698368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.698382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 
00:28:10.087 [2024-07-15 08:03:54.708314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.708371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.708386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.708393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.708400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.708414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.718317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.087 [2024-07-15 08:03:54.718375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.087 [2024-07-15 08:03:54.718398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.087 [2024-07-15 08:03:54.718405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.087 [2024-07-15 08:03:54.718411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.087 [2024-07-15 08:03:54.718429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.087 qpair failed and we were unable to recover it. 00:28:10.087 [2024-07-15 08:03:54.728381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.728438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.728459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.728467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.728473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.728488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 
00:28:10.088 [2024-07-15 08:03:54.738437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.738526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.738541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.738548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.738554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.738569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.748381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.748441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.748456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.748464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.748470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.748485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.758403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.758462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.758477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.758484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.758491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.758505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 
00:28:10.088 [2024-07-15 08:03:54.768433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.768495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.768510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.768518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.768527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.768542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.778534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.778588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.778602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.778611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.778617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.778632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.788582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.788641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.788655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.788663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.788670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.788685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 
00:28:10.088 [2024-07-15 08:03:54.798628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.798686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.798701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.798708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.798715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.798730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.808612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.808676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.808692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.808699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.808706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.808720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.088 [2024-07-15 08:03:54.818639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.818698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.818713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.818720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.818727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.818742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 
00:28:10.088 [2024-07-15 08:03:54.828667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.088 [2024-07-15 08:03:54.828749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.088 [2024-07-15 08:03:54.828764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.088 [2024-07-15 08:03:54.828771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.088 [2024-07-15 08:03:54.828777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.088 [2024-07-15 08:03:54.828791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.088 qpair failed and we were unable to recover it. 00:28:10.348 [2024-07-15 08:03:54.838693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.348 [2024-07-15 08:03:54.838751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.348 [2024-07-15 08:03:54.838765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.348 [2024-07-15 08:03:54.838772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.348 [2024-07-15 08:03:54.838779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.348 [2024-07-15 08:03:54.838793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.348 qpair failed and we were unable to recover it. 00:28:10.348 [2024-07-15 08:03:54.848659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.348 [2024-07-15 08:03:54.848717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.348 [2024-07-15 08:03:54.848732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.348 [2024-07-15 08:03:54.848740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.348 [2024-07-15 08:03:54.848747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.348 [2024-07-15 08:03:54.848761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.348 qpair failed and we were unable to recover it. 
00:28:10.348 [2024-07-15 08:03:54.858751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.348 [2024-07-15 08:03:54.858808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.348 [2024-07-15 08:03:54.858823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.348 [2024-07-15 08:03:54.858830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.348 [2024-07-15 08:03:54.858840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.348 [2024-07-15 08:03:54.858855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.348 qpair failed and we were unable to recover it. 00:28:10.348 [2024-07-15 08:03:54.868780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.348 [2024-07-15 08:03:54.868871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.348 [2024-07-15 08:03:54.868886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.348 [2024-07-15 08:03:54.868894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.348 [2024-07-15 08:03:54.868900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.348 [2024-07-15 08:03:54.868916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.348 qpair failed and we were unable to recover it. 00:28:10.348 [2024-07-15 08:03:54.878752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.348 [2024-07-15 08:03:54.878809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.348 [2024-07-15 08:03:54.878824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.348 [2024-07-15 08:03:54.878831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.348 [2024-07-15 08:03:54.878838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.878853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 
00:28:10.349 [2024-07-15 08:03:54.888819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.888877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.888892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.888898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.888905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.888919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.898872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.898931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.898945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.898952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.898958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.898973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.908925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.908989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.909004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.909011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.909017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.909032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 
00:28:10.349 [2024-07-15 08:03:54.918920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.918977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.918991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.918998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.919005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.919019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.928873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.928931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.928945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.928952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.928959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.928973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.938910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.938968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.938982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.938989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.938995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.939010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 
00:28:10.349 [2024-07-15 08:03:54.949002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.949056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.949071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.949081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.949088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.949102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.959041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.959099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.959114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.959121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.959128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.959142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.969087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.969150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.969167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.969174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.969180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.969195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 
00:28:10.349 [2024-07-15 08:03:54.979114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.979198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.979213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.979220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.979230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.979245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.989128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.989200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.989215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.989222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.989232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.989247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:54.999161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:54.999220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:54.999239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:54.999246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:54.999253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:54.999268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 
00:28:10.349 [2024-07-15 08:03:55.009184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:55.009247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:55.009262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:55.009269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:55.009275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:55.009290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:55.019179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.349 [2024-07-15 08:03:55.019249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.349 [2024-07-15 08:03:55.019264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.349 [2024-07-15 08:03:55.019271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.349 [2024-07-15 08:03:55.019278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.349 [2024-07-15 08:03:55.019292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.349 qpair failed and we were unable to recover it. 00:28:10.349 [2024-07-15 08:03:55.029257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.029313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.029327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.029335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.029341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.029356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 
00:28:10.350 [2024-07-15 08:03:55.039270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.039326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.039344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.039352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.039358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.039373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 00:28:10.350 [2024-07-15 08:03:55.049296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.049353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.049368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.049375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.049381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.049395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 00:28:10.350 [2024-07-15 08:03:55.059323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.059376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.059390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.059397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.059404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.059418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 
00:28:10.350 [2024-07-15 08:03:55.069341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.069395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.069410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.069417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.069423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.069438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 00:28:10.350 [2024-07-15 08:03:55.079396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.079454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.079468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.079476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.079482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.079499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 00:28:10.350 [2024-07-15 08:03:55.089418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.350 [2024-07-15 08:03:55.089476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.350 [2024-07-15 08:03:55.089490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.350 [2024-07-15 08:03:55.089498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.350 [2024-07-15 08:03:55.089504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.350 [2024-07-15 08:03:55.089518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.350 qpair failed and we were unable to recover it. 
00:28:10.610 [2024-07-15 08:03:55.099439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.099491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.099507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.099514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.099521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.099536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 00:28:10.610 [2024-07-15 08:03:55.109460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.109514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.109528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.109536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.109542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.109557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 00:28:10.610 [2024-07-15 08:03:55.119498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.119557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.119572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.119579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.119585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.119600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 
00:28:10.610 [2024-07-15 08:03:55.129533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.129589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.129606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.129614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.129620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.129635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 00:28:10.610 [2024-07-15 08:03:55.139619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.139722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.139739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.139746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.139752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.139767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 00:28:10.610 [2024-07-15 08:03:55.149578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:10.610 [2024-07-15 08:03:55.149633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:10.610 [2024-07-15 08:03:55.149648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:10.610 [2024-07-15 08:03:55.149656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:10.610 [2024-07-15 08:03:55.149662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:10.610 [2024-07-15 08:03:55.149677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.610 qpair failed and we were unable to recover it. 
[The identical seven-record CONNECT failure group repeats 66 more times, differing only in timestamps (08:03:55.159 through 08:03:55.811, one attempt roughly every 10 ms): _nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1", connect poll rc -5 with sct 1 / sc 130, "Failed to connect tqpair=0x7f2e30000b90", "CQ transport error -6 (No such device or address) on qpair id 2", each ending "qpair failed and we were unable to recover it."]
00:28:11.134 [2024-07-15 08:03:55.821516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.821572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.821587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.821594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.821601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.821615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 00:28:11.134 [2024-07-15 08:03:55.831593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.831644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.831659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.831667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.831673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.831688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 00:28:11.134 [2024-07-15 08:03:55.841593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.841698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.841716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.841723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.841729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.841743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 
00:28:11.134 [2024-07-15 08:03:55.851604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.851673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.851688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.851695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.851701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.851716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 00:28:11.134 [2024-07-15 08:03:55.861643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.861719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.861734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.861741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.861747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.861761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 00:28:11.134 [2024-07-15 08:03:55.871638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.871710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.871725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.871733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.871738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.871754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 
00:28:11.134 [2024-07-15 08:03:55.881715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.134 [2024-07-15 08:03:55.881770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.134 [2024-07-15 08:03:55.881785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.134 [2024-07-15 08:03:55.881791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.134 [2024-07-15 08:03:55.881797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.134 [2024-07-15 08:03:55.881812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.134 qpair failed and we were unable to recover it. 00:28:11.392 [2024-07-15 08:03:55.891735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.891806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.891821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.891827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.891834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.891848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 00:28:11.392 [2024-07-15 08:03:55.901789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.901848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.901863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.901870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.901876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.901891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 
00:28:11.392 [2024-07-15 08:03:55.911802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.911862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.911877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.911885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.911891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.911905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 00:28:11.392 [2024-07-15 08:03:55.921868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.921925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.921940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.921949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.921957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.921971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 00:28:11.392 [2024-07-15 08:03:55.931811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.931881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.931899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.931907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.931912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.931927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 
00:28:11.392 [2024-07-15 08:03:55.941864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.941923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.941938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.941946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.941952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.941966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 00:28:11.392 [2024-07-15 08:03:55.951916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.392 [2024-07-15 08:03:55.951974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.392 [2024-07-15 08:03:55.951989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.392 [2024-07-15 08:03:55.951997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.392 [2024-07-15 08:03:55.952002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.392 [2024-07-15 08:03:55.952017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.392 qpair failed and we were unable to recover it. 00:28:11.393 [2024-07-15 08:03:55.961934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:55.962005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:55.962020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:55.962028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:55.962034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.393 [2024-07-15 08:03:55.962048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.393 qpair failed and we were unable to recover it. 
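The run above repeats roughly every 10 ms against the same host qpair object (tqpair=0x7f2e30000b90, qpair id 2); in the entries that follow, the identical failure shows up on a second qpair (0x7f2e38000b90, qpair id 1) and then on a third (0xdbaed0, qpair id 3), i.e. the whole controller is affected rather than a single queue. When triaging a log like this, grouping the failures by qpair pointer makes that progression obvious (build.log is a placeholder for wherever this console output was captured):

grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn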
00:28:11.393 [2024-07-15 08:03:55.971969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:55.972025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:55.972040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:55.972047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:55.972053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e30000b90 00:28:11.393 [2024-07-15 08:03:55.972071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:11.393 qpair failed and we were unable to recover it. 00:28:11.393 [2024-07-15 08:03:55.982242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:55.982362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:55.982418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:55.982443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:55.982463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e38000b90 00:28:11.393 [2024-07-15 08:03:55.982511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:11.393 qpair failed and we were unable to recover it. 00:28:11.393 [2024-07-15 08:03:55.992031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:55.992120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:55.992149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:55.992163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:55.992176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e38000b90 00:28:11.393 [2024-07-15 08:03:55.992205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:11.393 qpair failed and we were unable to recover it. 00:28:11.393 [2024-07-15 08:03:55.992317] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:11.393 A controller has encountered a failure and is being reset. 
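The "Submitting Keep Alive failed" line is the tipping point: an NVMe-oF host treats a missed Keep Alive as a controller-level failure, which is what moves this host from per-qpair CONNECT retries to the full controller reset announced above. For comparison, on a kernel initiator the keep-alive interval is an explicit knob at connect time — a sketch assuming nvme-cli is installed, with the target address and subsystem NQN taken from this log:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --keep-alive-tmo=5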
00:28:11.393 [2024-07-15 08:03:56.002074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:56.002187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:56.002252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:56.002278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:56.002298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdbaed0 00:28:11.393 [2024-07-15 08:03:56.002346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.393 qpair failed and we were unable to recover it. 00:28:11.393 [2024-07-15 08:03:56.012112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:11.393 [2024-07-15 08:03:56.012197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:11.393 [2024-07-15 08:03:56.012235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:11.393 [2024-07-15 08:03:56.012251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:11.393 [2024-07-15 08:03:56.012263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdbaed0 00:28:11.393 [2024-07-15 08:03:56.012291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:11.393 qpair failed and we were unable to recover it. 00:28:11.393 Controller properly reset. 00:28:11.393 Initializing NVMe Controllers 00:28:11.393 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:11.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:11.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:11.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:11.393 Initialization complete. Launching workers. 
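After the reset the host reattaches to nqn.2016-06.io.spdk:cnode1 and the I/O workers resume on all four lcores, so the tc2 case ends cleanly. A quick out-of-band check that the target really is accepting connections again can be run from any initiator shell — this assumes nvme-cli is available; address, port, and NQN are the ones printed above:

nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1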
00:28:11.393 Starting thread on core 1 00:28:11.393 Starting thread on core 2 00:28:11.393 Starting thread on core 3 00:28:11.393 Starting thread on core 0 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:11.393 00:28:11.393 real 0m11.375s 00:28:11.393 user 0m21.536s 00:28:11.393 sys 0m4.619s 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.393 ************************************ 00:28:11.393 END TEST nvmf_target_disconnect_tc2 00:28:11.393 ************************************ 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.393 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.393 rmmod nvme_tcp 00:28:11.393 rmmod nvme_fabrics 00:28:11.651 rmmod nvme_keyring 00:28:11.651 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3414955 ']' 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3414955 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3414955 ']' 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3414955 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414955 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414955' 00:28:11.652 killing process with pid 3414955 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3414955 00:28:11.652 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3414955 00:28:11.909 
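The killprocess trace above follows a fixed shape: validate the pid argument, probe the process with kill -0, refuse to signal anything whose comm is sudo, then kill and wait. A condensed sketch of that pattern, reconstructed from the xtrace — this is not the verbatim autotest_common.sh body:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do, already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}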
08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.910 08:03:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.811 08:03:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.811 00:28:13.811 real 0m19.838s 00:28:13.811 user 0m49.027s 00:28:13.811 sys 0m9.286s 00:28:13.811 08:03:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:13.811 08:03:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:13.811 ************************************ 00:28:13.811 END TEST nvmf_target_disconnect 00:28:13.811 ************************************ 00:28:13.811 08:03:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:13.811 08:03:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:28:13.811 08:03:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.811 08:03:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.070 08:03:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:28:14.070 00:28:14.070 real 21m26.499s 00:28:14.070 user 45m34.874s 00:28:14.070 sys 6m44.247s 00:28:14.070 08:03:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.070 08:03:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.070 ************************************ 00:28:14.070 END TEST nvmf_tcp 00:28:14.070 ************************************ 00:28:14.070 08:03:58 -- common/autotest_common.sh@1142 -- # return 0 00:28:14.070 08:03:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:28:14.070 08:03:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:14.070 08:03:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.070 08:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.070 08:03:58 -- common/autotest_common.sh@10 -- # set +x 00:28:14.070 ************************************ 00:28:14.070 START TEST spdkcli_nvmf_tcp 00:28:14.070 ************************************ 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:14.070 * Looking for test storage... 
00:28:14.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.070 08:03:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3416486 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3416486 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3416486 ']' 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.071 08:03:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.071 [2024-07-15 08:03:58.802953] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:28:14.071 [2024-07-15 08:03:58.803000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416486 ] 00:28:14.329 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.329 [2024-07-15 08:03:58.869134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:14.329 [2024-07-15 08:03:58.951366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.329 [2024-07-15 08:03:58.951367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:14.896 08:03:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:15.154 08:03:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:15.154 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.154 08:03:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.154 08:03:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:15.154 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:15.154 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:15.154 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:15.154 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:15.154 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:15.154 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:15.154 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:15.154 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:15.154 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:15.154 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:15.154 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:15.155 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:15.155 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:15.155 ' 00:28:17.686 [2024-07-15 08:04:02.223326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.083 [2024-07-15 08:04:03.507610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:21.612 [2024-07-15 08:04:05.891006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:23.515 [2024-07-15 08:04:07.945482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:24.893 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:24.893 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:24.893 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:24.893 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:24.893 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:24.893 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:24.893 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:24.893 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:24.893 08:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:24.893 08:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:24.893 08:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.152 08:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:25.152 08:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.152 08:04:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.152 08:04:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:25.152 08:04:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:25.411 08:04:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:25.411 08:04:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:25.411 08:04:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.412 08:04:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:25.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:25.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:25.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:25.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:25.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:25.412 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:25.412 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:25.412 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:25.412 ' 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:30.686 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:30.686 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:30.686 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:30.686 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3416486 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3416486 ']' 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3416486 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416486 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416486' 00:28:30.945 killing process with pid 3416486 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3416486 00:28:30.945 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3416486 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3416486 ']' 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3416486 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3416486 ']' 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3416486 00:28:31.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3416486) - No such process 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3416486 is not found' 00:28:31.205 Process with pid 3416486 is not found 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:31.205 00:28:31.205 real 0m17.113s 00:28:31.205 user 0m37.250s 00:28:31.205 sys 0m0.895s 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.205 08:04:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:31.205 ************************************ 00:28:31.205 END TEST spdkcli_nvmf_tcp 00:28:31.205 ************************************ 00:28:31.205 08:04:15 -- common/autotest_common.sh@1142 -- # return 0 00:28:31.205 08:04:15 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:31.205 08:04:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:31.205 08:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.205 08:04:15 -- common/autotest_common.sh@10 -- # set +x 00:28:31.205 ************************************ 00:28:31.205 START TEST nvmf_identify_passthru 00:28:31.205 ************************************ 00:28:31.205 08:04:15 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:31.205 * Looking for test storage... 00:28:31.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:31.205 08:04:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:31.205 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:31.205 08:04:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.205 08:04:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.205 08:04:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.206 08:04:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.206 08:04:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:31.206 08:04:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.206 08:04:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.206 08:04:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:31.206 08:04:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:31.206 08:04:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:31.206 08:04:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.814 08:04:21 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.814 Found net devices under 0000:86:00.0: cvl_0_0 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.814 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
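The device walk traced above keys the supported NIC families off PCI vendor:device IDs (0x8086:0x159b for the two E810 ports found here) and resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, not part of the captured trace, assuming pciutils is installed and an E810 is present (the sysfs loop mirrors nvmf/common.sh; the lspci front end is an illustration):

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # each PCI function lists its bound net devices under sysfs
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done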
00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:37.814 00:28:37.814 --- 10.0.0.2 ping statistics --- 00:28:37.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.814 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:37.814 00:28:37.814 --- 10.0.0.1 ping statistics --- 00:28:37.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.814 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.814 08:04:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.814 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:37.814 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.814 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:37.814 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:37.814 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:37.815 08:04:21 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:37.815 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:37.815 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:37.815 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:37.815 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:37.815 08:04:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:37.815 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.057 
08:04:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:42.057 08:04:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:42.057 08:04:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:42.057 08:04:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:42.057 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3423736 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.347 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3423736 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3423736 ']' 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:45.347 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:45.606 [2024-07-15 08:04:30.132341] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:28:45.606 [2024-07-15 08:04:30.132386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.606 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.606 [2024-07-15 08:04:30.202050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.606 [2024-07-15 08:04:30.275759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.606 [2024-07-15 08:04:30.275802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
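The identify step traced above derives the first NVMe bdf from gen_nvme.sh and scrapes the serial and model strings with grep/awk. A condensed sketch of that pipeline, not part of the captured trace, assuming the SPDK build-tree layout used in this run (the commands and the 0000:5e:00.0 bdf come from the trace itself):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')
    # awk '{print $3}' keeps only the first word of the model string,
    # which is why the trace records nvme_model_number=INTEL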
00:28:45.606 [2024-07-15 08:04:30.275808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.606 [2024-07-15 08:04:30.275814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.606 [2024-07-15 08:04:30.275819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.606 [2024-07-15 08:04:30.275882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.607 [2024-07-15 08:04:30.276074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.607 [2024-07-15 08:04:30.275990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.607 [2024-07-15 08:04:30.276075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:46.558 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.558 INFO: Log level set to 20 00:28:46.558 INFO: Requests: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "method": "nvmf_set_config", 00:28:46.558 "id": 1, 00:28:46.558 "params": { 00:28:46.558 "admin_cmd_passthru": { 00:28:46.558 "identify_ctrlr": true 00:28:46.558 } 00:28:46.558 } 00:28:46.558 } 00:28:46.558 00:28:46.558 INFO: response: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "id": 1, 00:28:46.558 "result": true 00:28:46.558 } 00:28:46.558 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.558 08:04:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.558 08:04:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.558 INFO: Setting log level to 20 00:28:46.558 INFO: Setting log level to 20 00:28:46.558 INFO: Log level set to 20 00:28:46.558 INFO: Log level set to 20 00:28:46.558 INFO: Requests: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "method": "framework_start_init", 00:28:46.558 "id": 1 00:28:46.558 } 00:28:46.558 00:28:46.558 INFO: Requests: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "method": "framework_start_init", 00:28:46.558 "id": 1 00:28:46.558 } 00:28:46.558 00:28:46.558 [2024-07-15 08:04:31.037072] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:46.558 INFO: response: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "id": 1, 00:28:46.558 "result": true 00:28:46.558 } 00:28:46.558 00:28:46.558 INFO: response: 00:28:46.558 { 00:28:46.558 "jsonrpc": "2.0", 00:28:46.558 "id": 1, 00:28:46.558 "result": true 00:28:46.558 } 00:28:46.558 00:28:46.558 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.559 08:04:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.559 08:04:31 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.559 INFO: Setting log level to 40 00:28:46.559 INFO: Setting log level to 40 00:28:46.559 INFO: Setting log level to 40 00:28:46.559 [2024-07-15 08:04:31.050405] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.559 08:04:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.559 08:04:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.559 08:04:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 Nvme0n1 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 [2024-07-15 08:04:33.944624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 [ 00:28:49.848 { 00:28:49.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:49.848 "subtype": "Discovery", 00:28:49.848 "listen_addresses": [], 00:28:49.848 "allow_any_host": true, 00:28:49.848 "hosts": [] 00:28:49.848 }, 00:28:49.848 { 00:28:49.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.848 "subtype": "NVMe", 00:28:49.848 "listen_addresses": [ 00:28:49.848 { 00:28:49.848 "trtype": "TCP", 00:28:49.848 "adrfam": "IPv4", 00:28:49.848 "traddr": "10.0.0.2", 00:28:49.848 "trsvcid": "4420" 00:28:49.848 } 00:28:49.848 ], 00:28:49.848 "allow_any_host": true, 00:28:49.848 "hosts": [], 00:28:49.848 "serial_number": 
"SPDK00000000000001", 00:28:49.848 "model_number": "SPDK bdev Controller", 00:28:49.848 "max_namespaces": 1, 00:28:49.848 "min_cntlid": 1, 00:28:49.848 "max_cntlid": 65519, 00:28:49.848 "namespaces": [ 00:28:49.848 { 00:28:49.848 "nsid": 1, 00:28:49.848 "bdev_name": "Nvme0n1", 00:28:49.848 "name": "Nvme0n1", 00:28:49.848 "nguid": "07B81A27E8534D18A2855CB7D77283D8", 00:28:49.848 "uuid": "07b81a27-e853-4d18-a285-5cb7d77283d8" 00:28:49.848 } 00:28:49.848 ] 00:28:49.848 } 00:28:49.848 ] 00:28:49.848 08:04:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:49.848 08:04:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:49.848 08:04:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:49.848 rmmod nvme_tcp 00:28:49.848 rmmod nvme_fabrics 00:28:49.848 rmmod nvme_keyring 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:49.848 08:04:34 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3423736 ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3423736 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3423736 ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3423736 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3423736 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3423736' 00:28:49.848 killing process with pid 3423736 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3423736 00:28:49.848 08:04:34 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3423736 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:51.222 08:04:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.481 08:04:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:51.482 08:04:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.388 08:04:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.388 00:28:53.388 real 0m22.218s 00:28:53.388 user 0m30.344s 00:28:53.388 sys 0m5.081s 00:28:53.388 08:04:38 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:53.388 08:04:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:53.388 ************************************ 00:28:53.388 END TEST nvmf_identify_passthru 00:28:53.388 ************************************ 00:28:53.388 08:04:38 -- common/autotest_common.sh@1142 -- # return 0 00:28:53.388 08:04:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:53.388 08:04:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.388 08:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.388 08:04:38 -- common/autotest_common.sh@10 -- # set +x 00:28:53.388 ************************************ 00:28:53.388 START TEST nvmf_dif 00:28:53.388 ************************************ 00:28:53.388 08:04:38 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:53.648 * Looking for test storage... 
00:28:53.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.648 08:04:38 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.648 08:04:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.649 08:04:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.649 08:04:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.649 08:04:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.649 08:04:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.649 08:04:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.649 08:04:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.649 08:04:38 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:53.649 08:04:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.649 08:04:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:53.649 08:04:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:53.649 08:04:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:53.649 08:04:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:53.649 08:04:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.649 08:04:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:53.649 08:04:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.649 08:04:38 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.649 08:04:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.963 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.963 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
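The nvmf_tcp_init sequence that follows (identical to the one run earlier for the identify_passthru test) moves one E810 port into a private network namespace, so target and initiator talk over a real link within a single host. A sketch of that topology, not part of the captured trace, using the interface names and addresses recorded in this run; grouping the steps into one script is illustrative:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator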
00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.963 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.963 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.963 08:04:43 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.964 08:04:43 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.222 08:04:43 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:28:59.222 00:28:59.222 --- 10.0.0.2 ping statistics --- 00:28:59.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.222 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:59.222 00:28:59.222 --- 10.0.0.1 ping statistics --- 00:28:59.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.222 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:59.222 08:04:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:01.875 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:01.875 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:01.875 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:02.134 08:04:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:02.134 08:04:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3429302 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3429302 00:29:02.134 08:04:46 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3429302 ']' 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.134 08:04:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.134 [2024-07-15 08:04:46.770907] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:29:02.134 [2024-07-15 08:04:46.770954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.134 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.134 [2024-07-15 08:04:46.841911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.393 [2024-07-15 08:04:46.919459] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.393 [2024-07-15 08:04:46.919492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.393 [2024-07-15 08:04:46.919499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.393 [2024-07-15 08:04:46.919505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.393 [2024-07-15 08:04:46.919510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
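The two entries above are the heart of nvmfappstart: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with the 0xFFFF tracepoint mask, and the harness then blocks until the app answers on /var/tmp/spdk.sock. A minimal bash sketch of that launch-and-wait pattern follows; the poll loop is only an illustration of the waitforlisten idea, not the harness's exact implementation, and the paths are the ones from this run.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target inside the test namespace: -i 0 sets the shm id, -e 0xFFFF enables all tracepoint groups
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# Block until the app is listening on the default RPC socket /var/tmp/spdk.sock
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done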
00:29:02.394 [2024-07-15 08:04:46.919528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:02.961 08:04:47 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.961 08:04:47 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.961 08:04:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:02.961 08:04:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.961 [2024-07-15 08:04:47.607644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.961 08:04:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.961 08:04:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.961 ************************************ 00:29:02.961 START TEST fio_dif_1_default 00:29:02.961 ************************************ 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:02.961 bdev_null0 00:29:02.961 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:02.962 [2024-07-15 08:04:47.683945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:02.962 { 00:29:02.962 "params": { 00:29:02.962 "name": "Nvme$subsystem", 00:29:02.962 "trtype": "$TEST_TRANSPORT", 00:29:02.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.962 "adrfam": "ipv4", 00:29:02.962 "trsvcid": "$NVMF_PORT", 00:29:02.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.962 "hdgst": ${hdgst:-false}, 00:29:02.962 "ddgst": ${ddgst:-false} 00:29:02.962 }, 00:29:02.962 "method": "bdev_nvme_attach_controller" 00:29:02.962 } 00:29:02.962 EOF 00:29:02.962 )") 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:02.962 08:04:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:02.962 "params": { 00:29:02.962 "name": "Nvme0", 00:29:02.962 "trtype": "tcp", 00:29:02.962 "traddr": "10.0.0.2", 00:29:02.962 "adrfam": "ipv4", 00:29:02.962 "trsvcid": "4420", 00:29:02.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.962 "hdgst": false, 00:29:02.962 "ddgst": false 00:29:02.962 }, 00:29:02.962 "method": "bdev_nvme_attach_controller" 00:29:02.962 }' 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:03.253 08:04:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:03.518 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:03.518 fio-3.35 00:29:03.518 Starting 1 thread 00:29:03.518 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.719 00:29:15.719 filename0: (groupid=0, jobs=1): err= 0: pid=3429811: Mon Jul 15 08:04:58 2024 00:29:15.719 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:29:15.719 slat (nsec): min=5630, max=65013, avg=6342.27, stdev=2327.06 00:29:15.719 clat (usec): min=40819, max=46824, avg=41010.06, stdev=381.14 00:29:15.719 lat (usec): min=40825, max=46856, avg=41016.40, stdev=381.69 00:29:15.719 clat percentiles (usec): 00:29:15.719 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:15.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:15.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:15.719 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:29:15.719 | 99.99th=[46924] 00:29:15.719 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:29:15.719 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:15.719 
lat (msec) : 50=100.00% 00:29:15.719 cpu : usr=94.51%, sys=5.24%, ctx=10, majf=0, minf=295 00:29:15.719 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.719 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.719 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:15.719 00:29:15.719 Run status group 0 (all jobs): 00:29:15.719 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.719 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.719 00:29:15.719 real 0m11.336s 00:29:15.719 user 0m16.111s 00:29:15.719 sys 0m0.878s 00:29:15.720 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.720 08:04:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 ************************************ 00:29:15.720 END TEST fio_dif_1_default 00:29:15.720 ************************************ 00:29:15.720 08:04:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:15.720 08:04:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:15.720 08:04:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:15.720 08:04:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 ************************************ 00:29:15.720 START TEST fio_dif_1_multi_subsystems 00:29:15.720 ************************************ 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.720 08:04:59 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 bdev_null0 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 [2024-07-15 08:04:59.090373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 bdev_null1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.720 { 00:29:15.720 "params": { 00:29:15.720 "name": "Nvme$subsystem", 00:29:15.720 "trtype": "$TEST_TRANSPORT", 00:29:15.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.720 "adrfam": "ipv4", 00:29:15.720 "trsvcid": "$NVMF_PORT", 00:29:15.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.720 "hdgst": ${hdgst:-false}, 00:29:15.720 "ddgst": ${ddgst:-false} 00:29:15.720 }, 00:29:15.720 "method": "bdev_nvme_attach_controller" 00:29:15.720 } 00:29:15.720 EOF 00:29:15.720 )") 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.720 
08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.720 { 00:29:15.720 "params": { 00:29:15.720 "name": "Nvme$subsystem", 00:29:15.720 "trtype": "$TEST_TRANSPORT", 00:29:15.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.720 "adrfam": "ipv4", 00:29:15.720 "trsvcid": "$NVMF_PORT", 00:29:15.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.720 "hdgst": ${hdgst:-false}, 00:29:15.720 "ddgst": ${ddgst:-false} 00:29:15.720 }, 00:29:15.720 "method": "bdev_nvme_attach_controller" 00:29:15.720 } 00:29:15.720 EOF 00:29:15.720 )") 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
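Everything from create_subsystems down to the config assembly above reduces to four RPCs per subsystem, each visible verbatim in the trace (rpc_cmd is a thin wrapper around scripts/rpc.py). A standalone sketch of the same sequence, assuming rpc.py is on PATH and the target is already running:

for sub in 0 1; do
    # 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
    rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" --serial-number "53313233-$sub" --allow-any-host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    # Expose the subsystem on the namespace-side address used throughout this run
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
done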
00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:15.720 "params": { 00:29:15.720 "name": "Nvme0", 00:29:15.720 "trtype": "tcp", 00:29:15.720 "traddr": "10.0.0.2", 00:29:15.720 "adrfam": "ipv4", 00:29:15.720 "trsvcid": "4420", 00:29:15.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.720 "hdgst": false, 00:29:15.720 "ddgst": false 00:29:15.720 }, 00:29:15.720 "method": "bdev_nvme_attach_controller" 00:29:15.720 },{ 00:29:15.720 "params": { 00:29:15.720 "name": "Nvme1", 00:29:15.720 "trtype": "tcp", 00:29:15.720 "traddr": "10.0.0.2", 00:29:15.720 "adrfam": "ipv4", 00:29:15.720 "trsvcid": "4420", 00:29:15.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:15.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:15.720 "hdgst": false, 00:29:15.720 "ddgst": false 00:29:15.720 }, 00:29:15.720 "method": "bdev_nvme_attach_controller" 00:29:15.720 }' 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:15.720 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:15.721 08:04:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.721 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:15.721 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:15.721 fio-3.35 00:29:15.721 Starting 2 threads 00:29:15.721 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.684 00:29:25.684 filename0: (groupid=0, jobs=1): err= 0: pid=3431780: Mon Jul 15 08:05:10 2024 00:29:25.684 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:29:25.684 slat (nsec): min=6034, max=47754, avg=7135.09, stdev=2207.54 00:29:25.684 clat (usec): min=399, max=43317, avg=21070.62, stdev=20485.84 00:29:25.684 lat (usec): min=405, max=43350, avg=21077.76, stdev=20485.19 00:29:25.684 clat percentiles (usec): 00:29:25.684 | 1.00th=[ 412], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 474], 00:29:25.684 | 30.00th=[ 486], 40.00th=[ 578], 50.00th=[40633], 60.00th=[41157], 00:29:25.684 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:29:25.684 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:29:25.684 | 99.99th=[43254] 
00:29:25.684 bw ( KiB/s): min= 670, max= 768, per=66.15%, avg=759.90, stdev=25.53, samples=20 00:29:25.684 iops : min= 167, max= 192, avg=189.95, stdev= 6.48, samples=20 00:29:25.684 lat (usec) : 500=34.45%, 750=15.13%, 1000=0.21% 00:29:25.684 lat (msec) : 50=50.21% 00:29:25.684 cpu : usr=97.57%, sys=2.17%, ctx=13, majf=0, minf=197 00:29:25.684 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.684 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.684 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:25.684 filename1: (groupid=0, jobs=1): err= 0: pid=3431781: Mon Jul 15 08:05:10 2024 00:29:25.684 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:29:25.684 slat (nsec): min=5990, max=33113, avg=7779.91, stdev=2596.17 00:29:25.684 clat (usec): min=40740, max=42021, avg=40982.93, stdev=91.64 00:29:25.684 lat (usec): min=40747, max=42032, avg=40990.71, stdev=91.92 00:29:25.684 clat percentiles (usec): 00:29:25.684 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:25.684 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:25.684 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:25.684 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:29:25.684 | 99.99th=[42206] 00:29:25.684 bw ( KiB/s): min= 383, max= 416, per=33.82%, avg=389.00, stdev=12.01, samples=19 00:29:25.684 iops : min= 95, max= 104, avg=97.21, stdev= 3.03, samples=19 00:29:25.684 lat (msec) : 50=100.00% 00:29:25.684 cpu : usr=97.68%, sys=2.07%, ctx=13, majf=0, minf=101 00:29:25.684 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.684 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.684 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:25.684 00:29:25.684 Run status group 0 (all jobs): 00:29:25.684 READ: bw=1147KiB/s (1175kB/s), 390KiB/s-759KiB/s (400kB/s-777kB/s), io=11.2MiB (11.8MB), run=10006-10040msec 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.684 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:25.942 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.942 00:29:25.942 real 0m11.389s 00:29:25.942 user 0m26.818s 00:29:25.942 sys 0m0.795s 00:29:25.942 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 ************************************ 00:29:25.942 END TEST fio_dif_1_multi_subsystems 00:29:25.942 ************************************ 00:29:25.942 08:05:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:25.942 08:05:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:25.942 08:05:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.942 08:05:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 ************************************ 00:29:25.942 START TEST fio_dif_rand_params 00:29:25.942 ************************************ 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 bdev_null0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:25.942 [2024-07-15 08:05:10.547015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:25.942 { 00:29:25.942 "params": { 00:29:25.942 "name": "Nvme$subsystem", 00:29:25.942 "trtype": "$TEST_TRANSPORT", 00:29:25.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.942 "adrfam": "ipv4", 00:29:25.942 "trsvcid": "$NVMF_PORT", 00:29:25.942 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.942 "hdgst": ${hdgst:-false}, 00:29:25.942 "ddgst": ${ddgst:-false} 00:29:25.942 }, 00:29:25.942 "method": "bdev_nvme_attach_controller" 00:29:25.942 } 00:29:25.942 EOF 00:29:25.942 )") 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:25.942 "params": { 00:29:25.942 "name": "Nvme0", 00:29:25.942 "trtype": "tcp", 00:29:25.942 "traddr": "10.0.0.2", 00:29:25.942 "adrfam": "ipv4", 00:29:25.942 "trsvcid": "4420", 00:29:25.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:25.942 "hdgst": false, 00:29:25.942 "ddgst": false 00:29:25.942 }, 00:29:25.942 "method": "bdev_nvme_attach_controller" 00:29:25.942 }' 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:25.942 08:05:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.200 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:26.200 ... 
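The status line just printed pins down the other half of the handoff, the job file on /dev/fd/61: randread in 128 KiB blocks at iodepth 3, and, from the NULL_DIF=3 parameters set earlier, numjobs=3 with a 5-second runtime against the Nvme0n1 bdev created by the attached controller. A plausible reconstruction of what gen_fio_conf writes; the real output is not shown in the trace, thread mode is inferred from the "Starting 3 threads" line below, and filename=Nvme0n1 assumes SPDK's usual <name>n1 namespace-bdev naming.

cat > dif.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF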
00:29:26.200 fio-3.35 00:29:26.200 Starting 3 threads 00:29:26.200 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.761 00:29:32.761 filename0: (groupid=0, jobs=1): err= 0: pid=3433696: Mon Jul 15 08:05:16 2024 00:29:32.761 read: IOPS=304, BW=38.0MiB/s (39.9MB/s)(192MiB/5048msec) 00:29:32.761 slat (nsec): min=6308, max=92729, avg=12196.89, stdev=3811.23 00:29:32.761 clat (usec): min=3305, max=51102, avg=9815.55, stdev=9630.90 00:29:32.761 lat (usec): min=3314, max=51112, avg=9827.74, stdev=9630.99 00:29:32.761 clat percentiles (usec): 00:29:32.761 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6259], 00:29:32.761 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:29:32.761 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[46400], 00:29:32.761 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:29:32.761 | 99.99th=[51119] 00:29:32.761 bw ( KiB/s): min=26880, max=51712, per=33.29%, avg=39244.80, stdev=9388.26, samples=10 00:29:32.761 iops : min= 210, max= 404, avg=306.60, stdev=73.35, samples=10 00:29:32.761 lat (msec) : 4=1.24%, 10=92.25%, 20=0.72%, 50=5.21%, 100=0.59% 00:29:32.761 cpu : usr=87.28%, sys=7.09%, ctx=360, majf=0, minf=190 00:29:32.761 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:32.761 filename0: (groupid=0, jobs=1): err= 0: pid=3433697: Mon Jul 15 08:05:16 2024 00:29:32.761 read: IOPS=320, BW=40.0MiB/s (42.0MB/s)(200MiB/5003msec) 00:29:32.761 slat (nsec): min=6268, max=25658, avg=10554.02, stdev=2277.40 00:29:32.761 clat (usec): min=2959, max=52554, avg=9355.01, stdev=8622.01 00:29:32.761 lat (usec): min=2965, max=52566, avg=9365.56, stdev=8622.07 00:29:32.761 clat percentiles (usec): 00:29:32.761 | 1.00th=[ 3294], 5.00th=[ 4015], 10.00th=[ 5276], 20.00th=[ 5932], 00:29:32.761 | 30.00th=[ 6849], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 8455], 00:29:32.761 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10290], 00:29:32.761 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:29:32.761 | 99.99th=[52691] 00:29:32.761 bw ( KiB/s): min=28416, max=51456, per=34.75%, avg=40960.00, stdev=7100.64, samples=10 00:29:32.761 iops : min= 222, max= 402, avg=320.00, stdev=55.47, samples=10 00:29:32.761 lat (msec) : 4=4.99%, 10=89.33%, 20=1.19%, 50=3.56%, 100=0.94% 00:29:32.761 cpu : usr=96.70%, sys=2.98%, ctx=17, majf=0, minf=113 00:29:32.761 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:32.761 filename0: (groupid=0, jobs=1): err= 0: pid=3433698: Mon Jul 15 08:05:16 2024 00:29:32.761 read: IOPS=301, BW=37.7MiB/s (39.6MB/s)(189MiB/5005msec) 00:29:32.761 slat (nsec): min=6312, max=27060, avg=10759.84, stdev=2329.65 00:29:32.761 clat (usec): min=3167, max=52138, avg=9922.91, stdev=6598.72 00:29:32.761 lat (usec): min=3173, max=52150, avg=9933.67, stdev=6599.01 00:29:32.761 clat percentiles 
(usec): 00:29:32.761 | 1.00th=[ 3720], 5.00th=[ 3982], 10.00th=[ 5080], 20.00th=[ 6390], 00:29:32.761 | 30.00th=[ 7111], 40.00th=[ 7898], 50.00th=[ 8979], 60.00th=[10552], 00:29:32.761 | 70.00th=[11600], 80.00th=[12125], 90.00th=[12780], 95.00th=[13173], 00:29:32.761 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:29:32.761 | 99.99th=[52167] 00:29:32.761 bw ( KiB/s): min=28416, max=48640, per=32.75%, avg=38604.80, stdev=6225.01, samples=10 00:29:32.761 iops : min= 222, max= 380, avg=301.60, stdev=48.63, samples=10 00:29:32.761 lat (msec) : 4=5.23%, 10=51.69%, 20=40.70%, 50=1.79%, 100=0.60% 00:29:32.761 cpu : usr=95.88%, sys=3.80%, ctx=8, majf=0, minf=62 00:29:32.761 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.761 issued rwts: total=1511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:32.761 00:29:32.761 Run status group 0 (all jobs): 00:29:32.761 READ: bw=115MiB/s (121MB/s), 37.7MiB/s-40.0MiB/s (39.6MB/s-42.0MB/s), io=581MiB (609MB), run=5003-5048msec 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 bdev_null0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 [2024-07-15 08:05:16.679342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 bdev_null1 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.761 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.762 bdev_null2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.762 { 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme$subsystem", 00:29:32.762 "trtype": "$TEST_TRANSPORT", 00:29:32.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "$NVMF_PORT", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.762 "hdgst": ${hdgst:-false}, 00:29:32.762 "ddgst": ${ddgst:-false} 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 } 00:29:32.762 EOF 00:29:32.762 )") 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.762 { 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme$subsystem", 00:29:32.762 "trtype": "$TEST_TRANSPORT", 00:29:32.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "$NVMF_PORT", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.762 "hdgst": ${hdgst:-false}, 00:29:32.762 "ddgst": ${ddgst:-false} 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 } 00:29:32.762 EOF 00:29:32.762 )") 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:32.762 08:05:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.762 { 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme$subsystem", 00:29:32.762 "trtype": "$TEST_TRANSPORT", 00:29:32.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "$NVMF_PORT", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.762 "hdgst": ${hdgst:-false}, 00:29:32.762 "ddgst": ${ddgst:-false} 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 } 00:29:32.762 EOF 00:29:32.762 )") 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme0", 00:29:32.762 "trtype": "tcp", 00:29:32.762 "traddr": "10.0.0.2", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "4420", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:32.762 "hdgst": false, 00:29:32.762 "ddgst": false 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 },{ 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme1", 00:29:32.762 "trtype": "tcp", 00:29:32.762 "traddr": "10.0.0.2", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "4420", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.762 "hdgst": false, 00:29:32.762 "ddgst": false 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 },{ 00:29:32.762 "params": { 00:29:32.762 "name": "Nvme2", 00:29:32.762 "trtype": "tcp", 00:29:32.762 "traddr": "10.0.0.2", 00:29:32.762 "adrfam": "ipv4", 00:29:32.762 "trsvcid": "4420", 00:29:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:32.762 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:32.762 "hdgst": false, 00:29:32.762 "ddgst": false 00:29:32.762 }, 00:29:32.762 "method": "bdev_nvme_attach_controller" 00:29:32.762 }' 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:32.762 
08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:32.762 08:05:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:32.762 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:32.762 ... 00:29:32.762 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:32.762 ... 00:29:32.762 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:32.762 ... 00:29:32.762 fio-3.35 00:29:32.762 Starting 24 threads 00:29:32.762 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.954 00:29:44.954 filename0: (groupid=0, jobs=1): err= 0: pid=3434796: Mon Jul 15 08:05:28 2024 00:29:44.954 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.4MiB/10097msec) 00:29:44.954 slat (nsec): min=6925, max=47135, avg=16036.35, stdev=6732.64 00:29:44.954 clat (msec): min=7, max=102, avg=28.07, stdev= 4.22 00:29:44.954 lat (msec): min=7, max=102, avg=28.09, stdev= 4.22 00:29:44.954 clat percentiles (msec): 00:29:44.954 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.954 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.954 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.954 | 99.00th=[ 34], 99.50th=[ 34], 99.90th=[ 103], 99.95th=[ 103], 00:29:44.954 | 99.99th=[ 103] 00:29:44.954 bw ( KiB/s): min= 2176, max= 2304, per=4.21%, avg=2284.80, stdev=46.89, samples=20 00:29:44.954 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:29:44.954 lat (msec) : 10=0.24%, 20=0.31%, 50=99.16%, 250=0.28% 00:29:44.954 cpu : usr=98.77%, sys=0.82%, ctx=17, majf=0, minf=9 00:29:44.954 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:44.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.954 filename0: (groupid=0, jobs=1): err= 0: pid=3434797: Mon Jul 15 08:05:28 2024 00:29:44.954 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10013msec) 00:29:44.954 slat (nsec): min=6828, max=83700, avg=37839.00, stdev=18809.73 00:29:44.954 clat (usec): min=3865, max=37858, avg=27194.04, stdev=2557.47 00:29:44.954 lat (usec): min=3878, max=37865, avg=27231.88, stdev=2563.51 00:29:44.954 clat percentiles (usec): 00:29:44.954 | 1.00th=[11076], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:44.954 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:44.954 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:44.954 | 99.00th=[28443], 99.50th=[28705], 99.90th=[34341], 99.95th=[38011], 00:29:44.954 | 99.99th=[38011] 00:29:44.954 bw ( KiB/s): min= 2176, max= 3112, per=4.27%, avg=2318.80, stdev=193.86, samples=20 00:29:44.954 iops : min= 544, max= 778, avg=579.70, stdev=48.46, samples=20 00:29:44.954 lat (msec) : 4=0.12%, 10=0.55%, 
20=2.72%, 50=96.61% 00:29:44.954 cpu : usr=98.97%, sys=0.63%, ctx=12, majf=0, minf=9 00:29:44.954 IO depths : 1=5.9%, 2=11.8%, 4=23.9%, 8=51.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:44.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 issued rwts: total=5813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.954 filename0: (groupid=0, jobs=1): err= 0: pid=3434798: Mon Jul 15 08:05:28 2024 00:29:44.954 read: IOPS=567, BW=2268KiB/s (2322kB/s)(22.4MiB/10095msec) 00:29:44.954 slat (nsec): min=5720, max=77818, avg=25951.91, stdev=15677.40 00:29:44.954 clat (msec): min=11, max=101, avg=27.99, stdev= 4.13 00:29:44.954 lat (msec): min=11, max=101, avg=28.01, stdev= 4.13 00:29:44.954 clat percentiles (msec): 00:29:44.954 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.954 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.954 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.954 | 99.00th=[ 34], 99.50th=[ 41], 99.90th=[ 101], 99.95th=[ 101], 00:29:44.954 | 99.99th=[ 103] 00:29:44.954 bw ( KiB/s): min= 2176, max= 2368, per=4.21%, avg=2283.20, stdev=57.13, samples=20 00:29:44.954 iops : min= 544, max= 592, avg=570.80, stdev=14.28, samples=20 00:29:44.954 lat (msec) : 20=1.00%, 50=98.72%, 100=0.05%, 250=0.23% 00:29:44.954 cpu : usr=98.82%, sys=0.78%, ctx=15, majf=0, minf=9 00:29:44.954 IO depths : 1=6.0%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:44.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.954 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.954 filename0: (groupid=0, jobs=1): err= 0: pid=3434799: Mon Jul 15 08:05:28 2024 00:29:44.954 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.2MiB/10081msec) 00:29:44.954 slat (nsec): min=8187, max=54931, avg=26532.13, stdev=8113.22 00:29:44.954 clat (msec): min=18, max=100, avg=28.16, stdev= 4.27 00:29:44.954 lat (msec): min=18, max=100, avg=28.19, stdev= 4.27 00:29:44.954 clat percentiles (msec): 00:29:44.954 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.954 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.954 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.954 | 99.00th=[ 29], 99.50th=[ 63], 99.90th=[ 101], 99.95th=[ 101], 00:29:44.954 | 99.99th=[ 101] 00:29:44.954 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2270.32, stdev=71.93, samples=19 00:29:44.954 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:29:44.954 lat (msec) : 20=0.04%, 50=99.40%, 100=0.28%, 250=0.28% 00:29:44.954 cpu : usr=98.67%, sys=0.94%, ctx=15, majf=0, minf=9 00:29:44.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename0: (groupid=0, jobs=1): err= 0: pid=3434800: Mon Jul 15 08:05:28 2024 00:29:44.955 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.2MiB/10074msec) 
00:29:44.955 slat (nsec): min=5227, max=55458, avg=25973.57, stdev=8175.86 00:29:44.955 clat (msec): min=26, max=100, avg=28.13, stdev= 4.12 00:29:44.955 lat (msec): min=27, max=100, avg=28.16, stdev= 4.12 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.955 | 99.00th=[ 29], 99.50th=[ 55], 99.90th=[ 101], 99.95th=[ 102], 00:29:44.955 | 99.99th=[ 102] 00:29:44.955 bw ( KiB/s): min= 2052, max= 2304, per=4.19%, avg=2270.53, stdev=71.25, samples=19 00:29:44.955 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:29:44.955 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.955 cpu : usr=98.79%, sys=0.82%, ctx=14, majf=0, minf=9 00:29:44.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename0: (groupid=0, jobs=1): err= 0: pid=3434801: Mon Jul 15 08:05:28 2024 00:29:44.955 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10088msec) 00:29:44.955 slat (nsec): min=6068, max=50089, avg=22281.07, stdev=8879.17 00:29:44.955 clat (msec): min=24, max=100, avg=28.17, stdev= 3.96 00:29:44.955 lat (msec): min=24, max=100, avg=28.19, stdev= 3.96 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.955 | 99.00th=[ 29], 99.50th=[ 45], 99.90th=[ 102], 99.95th=[ 102], 00:29:44.955 | 99.99th=[ 102] 00:29:44.955 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.53, stdev=57.55, samples=19 00:29:44.955 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:29:44.955 lat (msec) : 50=99.72%, 250=0.28% 00:29:44.955 cpu : usr=99.00%, sys=0.60%, ctx=8, majf=0, minf=9 00:29:44.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename0: (groupid=0, jobs=1): err= 0: pid=3434802: Mon Jul 15 08:05:28 2024 00:29:44.955 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10086msec) 00:29:44.955 slat (nsec): min=5095, max=42728, avg=20813.24, stdev=5732.69 00:29:44.955 clat (msec): min=19, max=102, avg=28.14, stdev= 4.04 00:29:44.955 lat (msec): min=19, max=102, avg=28.17, stdev= 4.04 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.955 | 99.00th=[ 29], 99.50th=[ 41], 99.90th=[ 103], 99.95th=[ 104], 00:29:44.955 | 99.99th=[ 104] 00:29:44.955 bw ( KiB/s): min= 2176, max= 2304, per=4.20%, avg=2277.05, stdev=53.61, samples=19 00:29:44.955 iops : min= 
544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:44.955 lat (msec) : 20=0.04%, 50=99.68%, 250=0.28% 00:29:44.955 cpu : usr=98.98%, sys=0.63%, ctx=14, majf=0, minf=9 00:29:44.955 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename0: (groupid=0, jobs=1): err= 0: pid=3434803: Mon Jul 15 08:05:28 2024 00:29:44.955 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10088msec) 00:29:44.955 slat (nsec): min=7483, max=49979, avg=23013.48, stdev=8609.89 00:29:44.955 clat (msec): min=18, max=100, avg=28.16, stdev= 3.96 00:29:44.955 lat (msec): min=18, max=100, avg=28.18, stdev= 3.96 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.955 | 99.00th=[ 29], 99.50th=[ 45], 99.90th=[ 101], 99.95th=[ 101], 00:29:44.955 | 99.99th=[ 101] 00:29:44.955 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.32, stdev=57.91, samples=19 00:29:44.955 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:44.955 lat (msec) : 20=0.04%, 50=99.68%, 250=0.28% 00:29:44.955 cpu : usr=98.33%, sys=1.26%, ctx=31, majf=0, minf=9 00:29:44.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename1: (groupid=0, jobs=1): err= 0: pid=3434804: Mon Jul 15 08:05:28 2024 00:29:44.955 read: IOPS=575, BW=2301KiB/s (2356kB/s)(22.6MiB/10070msec) 00:29:44.955 slat (nsec): min=5193, max=75289, avg=14821.29, stdev=10967.91 00:29:44.955 clat (msec): min=15, max=100, avg=27.65, stdev= 5.07 00:29:44.955 lat (msec): min=15, max=100, avg=27.66, stdev= 5.07 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 25], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 33], 95.00th=[ 34], 00:29:44.955 | 99.00th=[ 40], 99.50th=[ 53], 99.90th=[ 102], 99.95th=[ 102], 00:29:44.955 | 99.99th=[ 102] 00:29:44.955 bw ( KiB/s): min= 1984, max= 2416, per=4.27%, avg=2318.32, stdev=93.58, samples=19 00:29:44.955 iops : min= 496, max= 604, avg=579.58, stdev=23.40, samples=19 00:29:44.955 lat (msec) : 20=2.24%, 50=97.20%, 100=0.45%, 250=0.10% 00:29:44.955 cpu : usr=98.86%, sys=0.74%, ctx=10, majf=0, minf=9 00:29:44.955 IO depths : 1=0.1%, 2=0.3%, 4=3.0%, 8=80.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:29:44.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 complete : 0=0.0%, 4=89.1%, 8=9.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.955 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.955 filename1: (groupid=0, jobs=1): err= 0: pid=3434805: Mon Jul 15 08:05:28 2024 
00:29:44.955 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.2MiB/10074msec) 00:29:44.955 slat (nsec): min=5105, max=56413, avg=26591.42, stdev=8026.15 00:29:44.955 clat (msec): min=18, max=100, avg=28.14, stdev= 4.12 00:29:44.955 lat (msec): min=18, max=100, avg=28.17, stdev= 4.12 00:29:44.955 clat percentiles (msec): 00:29:44.955 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.955 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.955 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.955 | 99.00th=[ 29], 99.50th=[ 55], 99.90th=[ 101], 99.95th=[ 102], 00:29:44.955 | 99.99th=[ 102] 00:29:44.955 bw ( KiB/s): min= 2052, max= 2304, per=4.19%, avg=2270.53, stdev=71.25, samples=19 00:29:44.955 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:29:44.955 lat (msec) : 20=0.04%, 50=99.40%, 100=0.28%, 250=0.28% 00:29:44.955 cpu : usr=98.85%, sys=0.77%, ctx=16, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434806: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.1MiB/10066msec) 00:29:44.956 slat (nsec): min=7101, max=41914, avg=19968.16, stdev=5638.54 00:29:44.956 clat (msec): min=27, max=102, avg=28.25, stdev= 4.64 00:29:44.956 lat (msec): min=27, max=102, avg=28.27, stdev= 4.64 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 73], 99.90th=[ 104], 99.95th=[ 104], 00:29:44.956 | 99.99th=[ 104] 00:29:44.956 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2263.74, stdev=95.31, samples=19 00:29:44.956 iops : min= 480, max= 576, avg=565.89, stdev=23.98, samples=19 00:29:44.956 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.956 cpu : usr=98.92%, sys=0.70%, ctx=13, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434807: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.2MiB/10072msec) 00:29:44.956 slat (nsec): min=5190, max=55447, avg=24963.29, stdev=8007.06 00:29:44.956 clat (msec): min=26, max=102, avg=28.13, stdev= 4.05 00:29:44.956 lat (msec): min=26, max=102, avg=28.15, stdev= 4.05 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 51], 99.90th=[ 102], 99.95th=[ 102], 00:29:44.956 | 99.99th=[ 103] 00:29:44.956 bw ( KiB/s): 
min= 2052, max= 2304, per=4.19%, avg=2270.53, stdev=71.25, samples=19 00:29:44.956 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:29:44.956 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.956 cpu : usr=98.96%, sys=0.66%, ctx=8, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434808: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10012msec) 00:29:44.956 slat (nsec): min=6945, max=83657, avg=39048.83, stdev=18690.09 00:29:44.956 clat (usec): min=7123, max=33283, avg=27574.00, stdev=1616.53 00:29:44.956 lat (usec): min=7130, max=33318, avg=27613.05, stdev=1618.56 00:29:44.956 clat percentiles (usec): 00:29:44.956 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:29:44.956 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:29:44.956 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:29:44.956 | 99.00th=[28705], 99.50th=[28705], 99.90th=[32900], 99.95th=[33162], 00:29:44.956 | 99.99th=[33162] 00:29:44.956 bw ( KiB/s): min= 2176, max= 2436, per=4.21%, avg=2285.00, stdev=63.14, samples=20 00:29:44.956 iops : min= 544, max= 609, avg=571.25, stdev=15.78, samples=20 00:29:44.956 lat (msec) : 10=0.56%, 20=0.28%, 50=99.16% 00:29:44.956 cpu : usr=98.74%, sys=0.87%, ctx=12, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434809: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=563, BW=2253KiB/s (2307kB/s)(22.2MiB/10083msec) 00:29:44.956 slat (nsec): min=7570, max=54868, avg=24877.20, stdev=8697.61 00:29:44.956 clat (msec): min=26, max=100, avg=28.21, stdev= 4.31 00:29:44.956 lat (msec): min=26, max=100, avg=28.23, stdev= 4.31 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 65], 99.90th=[ 101], 99.95th=[ 102], 00:29:44.956 | 99.99th=[ 102] 00:29:44.956 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2270.32, stdev=71.93, samples=19 00:29:44.956 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:29:44.956 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.956 cpu : usr=98.88%, sys=0.73%, ctx=12, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434810: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.1MiB/10066msec) 00:29:44.956 slat (nsec): min=6634, max=44163, avg=19709.71, stdev=5731.75 00:29:44.956 clat (msec): min=27, max=102, avg=28.25, stdev= 4.64 00:29:44.956 lat (msec): min=27, max=102, avg=28.27, stdev= 4.64 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 73], 99.90th=[ 104], 99.95th=[ 104], 00:29:44.956 | 99.99th=[ 104] 00:29:44.956 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2263.74, stdev=95.31, samples=19 00:29:44.956 iops : min= 480, max= 576, avg=565.89, stdev=23.98, samples=19 00:29:44.956 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.956 cpu : usr=98.77%, sys=0.84%, ctx=10, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename1: (groupid=0, jobs=1): err= 0: pid=3434811: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10087msec) 00:29:44.956 slat (nsec): min=7770, max=46080, avg=20203.65, stdev=6048.51 00:29:44.956 clat (msec): min=21, max=102, avg=28.16, stdev= 4.05 00:29:44.956 lat (msec): min=21, max=102, avg=28.18, stdev= 4.05 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 42], 99.90th=[ 103], 99.95th=[ 103], 00:29:44.956 | 99.99th=[ 103] 00:29:44.956 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.32, stdev=57.91, samples=19 00:29:44.956 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:44.956 lat (msec) : 50=99.68%, 100=0.04%, 250=0.28% 00:29:44.956 cpu : usr=98.59%, sys=1.01%, ctx=22, majf=0, minf=9 00:29:44.956 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename2: (groupid=0, jobs=1): err= 0: pid=3434812: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.2MiB/10082msec) 00:29:44.956 slat (nsec): min=7111, max=56214, avg=26088.43, stdev=7725.75 00:29:44.956 clat (msec): min=26, max=100, avg=28.17, stdev= 4.29 00:29:44.956 lat (msec): min=26, max=100, avg=28.20, stdev= 4.29 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 
| 99.00th=[ 29], 99.50th=[ 64], 99.90th=[ 101], 99.95th=[ 101], 00:29:44.956 | 99.99th=[ 102] 00:29:44.956 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2270.32, stdev=71.93, samples=19 00:29:44.956 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:29:44.956 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.956 cpu : usr=98.91%, sys=0.70%, ctx=14, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename2: (groupid=0, jobs=1): err= 0: pid=3434813: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10087msec) 00:29:44.956 slat (nsec): min=4883, max=44746, avg=17905.14, stdev=6054.52 00:29:44.956 clat (msec): min=21, max=102, avg=28.19, stdev= 4.03 00:29:44.956 lat (msec): min=21, max=102, avg=28.21, stdev= 4.03 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 29], 99.50th=[ 42], 99.90th=[ 103], 99.95th=[ 103], 00:29:44.956 | 99.99th=[ 103] 00:29:44.956 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.32, stdev=57.91, samples=19 00:29:44.956 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:44.956 lat (msec) : 50=99.72%, 250=0.28% 00:29:44.956 cpu : usr=98.33%, sys=1.28%, ctx=22, majf=0, minf=9 00:29:44.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.956 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.956 filename2: (groupid=0, jobs=1): err= 0: pid=3434814: Mon Jul 15 08:05:28 2024 00:29:44.956 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10087msec) 00:29:44.956 slat (nsec): min=7273, max=42124, avg=19983.93, stdev=5950.50 00:29:44.956 clat (msec): min=21, max=102, avg=28.17, stdev= 4.16 00:29:44.956 lat (msec): min=21, max=102, avg=28.19, stdev= 4.16 00:29:44.956 clat percentiles (msec): 00:29:44.956 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.956 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.956 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.956 | 99.00th=[ 34], 99.50th=[ 42], 99.90th=[ 104], 99.95th=[ 104], 00:29:44.956 | 99.99th=[ 104] 00:29:44.956 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.32, stdev=56.16, samples=19 00:29:44.956 iops : min= 544, max= 576, avg=567.58, stdev=14.04, samples=19 00:29:44.956 lat (msec) : 50=99.72%, 250=0.28% 00:29:44.956 cpu : usr=98.73%, sys=0.88%, ctx=14, majf=0, minf=9 00:29:44.956 IO depths : 1=5.2%, 2=11.2%, 4=24.1%, 8=52.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:29:44.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5696,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 filename2: (groupid=0, jobs=1): err= 0: pid=3434815: Mon Jul 15 08:05:28 2024 00:29:44.957 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.2MiB/10070msec) 00:29:44.957 slat (nsec): min=5222, max=51733, avg=25569.01, stdev=7585.38 00:29:44.957 clat (msec): min=26, max=100, avg=28.12, stdev= 4.05 00:29:44.957 lat (msec): min=26, max=100, avg=28.15, stdev= 4.05 00:29:44.957 clat percentiles (msec): 00:29:44.957 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.957 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.957 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.957 | 99.00th=[ 29], 99.50th=[ 51], 99.90th=[ 101], 99.95th=[ 102], 00:29:44.957 | 99.99th=[ 102] 00:29:44.957 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2270.32, stdev=71.93, samples=19 00:29:44.957 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:29:44.957 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.957 cpu : usr=98.77%, sys=0.84%, ctx=63, majf=0, minf=9 00:29:44.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 filename2: (groupid=0, jobs=1): err= 0: pid=3434816: Mon Jul 15 08:05:28 2024 00:29:44.957 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10087msec) 00:29:44.957 slat (nsec): min=4958, max=42126, avg=20999.38, stdev=5755.17 00:29:44.957 clat (msec): min=22, max=102, avg=28.15, stdev= 4.03 00:29:44.957 lat (msec): min=22, max=102, avg=28.17, stdev= 4.03 00:29:44.957 clat percentiles (msec): 00:29:44.957 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.957 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.957 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:29:44.957 | 99.00th=[ 29], 99.50th=[ 42], 99.90th=[ 103], 99.95th=[ 104], 00:29:44.957 | 99.99th=[ 104] 00:29:44.957 bw ( KiB/s): min= 2176, max= 2304, per=4.19%, avg=2270.32, stdev=57.91, samples=19 00:29:44.957 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:44.957 lat (msec) : 50=99.72%, 250=0.28% 00:29:44.957 cpu : usr=98.89%, sys=0.70%, ctx=13, majf=0, minf=9 00:29:44.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 filename2: (groupid=0, jobs=1): err= 0: pid=3434817: Mon Jul 15 08:05:28 2024 00:29:44.957 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.2MiB/10077msec) 00:29:44.957 slat (nsec): min=4876, max=59495, avg=26474.28, stdev=8389.75 00:29:44.957 clat (msec): min=26, max=101, avg=28.13, stdev= 4.15 00:29:44.957 lat (msec): min=26, max=101, avg=28.16, stdev= 4.15 00:29:44.957 clat percentiles (msec): 00:29:44.957 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:29:44.957 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:29:44.957 | 70.00th=[ 28], 80.00th=[ 28], 
90.00th=[ 29], 95.00th=[ 29], 00:29:44.957 | 99.00th=[ 29], 99.50th=[ 57], 99.90th=[ 101], 99.95th=[ 102], 00:29:44.957 | 99.99th=[ 102] 00:29:44.957 bw ( KiB/s): min= 2043, max= 2304, per=4.19%, avg=2270.05, stdev=72.79, samples=19 00:29:44.957 iops : min= 510, max= 576, avg=567.47, stdev=18.33, samples=19 00:29:44.957 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:29:44.957 cpu : usr=99.10%, sys=0.51%, ctx=10, majf=0, minf=9 00:29:44.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 filename2: (groupid=0, jobs=1): err= 0: pid=3434818: Mon Jul 15 08:05:28 2024 00:29:44.957 read: IOPS=573, BW=2293KiB/s (2349kB/s)(22.4MiB/10018msec) 00:29:44.957 slat (nsec): min=7355, max=77481, avg=34162.16, stdev=13921.82 00:29:44.957 clat (usec): min=4587, max=33102, avg=27606.84, stdev=2001.57 00:29:44.957 lat (usec): min=4618, max=33130, avg=27641.00, stdev=2001.80 00:29:44.957 clat percentiles (usec): 00:29:44.957 | 1.00th=[17171], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:44.957 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:44.957 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:44.957 | 99.00th=[28705], 99.50th=[28967], 99.90th=[32900], 99.95th=[32900], 00:29:44.957 | 99.99th=[33162] 00:29:44.957 bw ( KiB/s): min= 2176, max= 2560, per=4.22%, avg=2291.20, stdev=82.01, samples=20 00:29:44.957 iops : min= 544, max= 640, avg=572.80, stdev=20.50, samples=20 00:29:44.957 lat (msec) : 10=0.84%, 20=0.28%, 50=98.89% 00:29:44.957 cpu : usr=97.93%, sys=1.19%, ctx=126, majf=0, minf=9 00:29:44.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 filename2: (groupid=0, jobs=1): err= 0: pid=3434819: Mon Jul 15 08:05:28 2024 00:29:44.957 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10016msec) 00:29:44.957 slat (nsec): min=7140, max=77275, avg=34100.99, stdev=14094.02 00:29:44.957 clat (usec): min=4571, max=33139, avg=27610.77, stdev=2059.61 00:29:44.957 lat (usec): min=4594, max=33186, avg=27644.87, stdev=2059.57 00:29:44.957 clat percentiles (usec): 00:29:44.957 | 1.00th=[16909], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:44.957 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:44.957 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:44.957 | 99.00th=[28705], 99.50th=[28967], 99.90th=[32900], 99.95th=[32900], 00:29:44.957 | 99.99th=[33162] 00:29:44.957 bw ( KiB/s): min= 2176, max= 2560, per=4.22%, avg=2291.20, stdev=82.01, samples=20 00:29:44.957 iops : min= 544, max= 640, avg=572.80, stdev=20.50, samples=20 00:29:44.957 lat (msec) : 10=0.84%, 20=0.28%, 50=98.89% 00:29:44.957 cpu : usr=98.05%, sys=1.12%, ctx=158, majf=0, minf=9 00:29:44.957 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:44.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.957 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.957 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:44.957 00:29:44.957 Run status group 0 (all jobs): 00:29:44.957 READ: bw=53.0MiB/s (55.5MB/s), 2251KiB/s-2322KiB/s (2305kB/s-2378kB/s), io=535MiB (561MB), run=10012-10097msec 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.957 bdev_null0 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:44.957 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 [2024-07-15 08:05:28.329804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:44.958 
08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 bdev_null1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.958 { 00:29:44.958 "params": { 00:29:44.958 "name": "Nvme$subsystem", 00:29:44.958 "trtype": "$TEST_TRANSPORT", 00:29:44.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.958 "adrfam": "ipv4", 00:29:44.958 "trsvcid": "$NVMF_PORT", 00:29:44.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:44.958 "hdgst": ${hdgst:-false}, 00:29:44.958 "ddgst": ${ddgst:-false} 00:29:44.958 }, 00:29:44.958 "method": "bdev_nvme_attach_controller" 00:29:44.958 } 00:29:44.958 EOF 00:29:44.958 )") 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.958 { 00:29:44.958 "params": { 00:29:44.958 "name": "Nvme$subsystem", 00:29:44.958 "trtype": "$TEST_TRANSPORT", 00:29:44.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.958 "adrfam": "ipv4", 00:29:44.958 "trsvcid": "$NVMF_PORT", 00:29:44.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.958 "hdgst": ${hdgst:-false}, 00:29:44.958 "ddgst": ${ddgst:-false} 00:29:44.958 }, 00:29:44.958 "method": "bdev_nvme_attach_controller" 00:29:44.958 } 00:29:44.958 EOF 00:29:44.958 )") 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:44.958 "params": { 00:29:44.958 "name": "Nvme0", 00:29:44.958 "trtype": "tcp", 00:29:44.958 "traddr": "10.0.0.2", 00:29:44.958 "adrfam": "ipv4", 00:29:44.958 "trsvcid": "4420", 00:29:44.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.958 "hdgst": false, 00:29:44.958 "ddgst": false 00:29:44.958 }, 00:29:44.958 "method": "bdev_nvme_attach_controller" 00:29:44.958 },{ 00:29:44.958 "params": { 00:29:44.958 "name": "Nvme1", 00:29:44.958 "trtype": "tcp", 00:29:44.958 "traddr": "10.0.0.2", 00:29:44.958 "adrfam": "ipv4", 00:29:44.958 "trsvcid": "4420", 00:29:44.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.958 "hdgst": false, 00:29:44.958 "ddgst": false 00:29:44.958 }, 00:29:44.958 "method": "bdev_nvme_attach_controller" 00:29:44.958 }' 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:44.958 08:05:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:44.958 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:44.958 ... 00:29:44.958 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:44.958 ... 
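The filename0/filename1 lines above pin down the shape of the job file that gen_fio_conf fed to fio for this pass: randread with split per-direction block sizes (8k reads, 16k writes, 128k trims), queue depth 8, and, per the dif.sh@115 settings earlier in the trace, numjobs=2 over a 5-second run, which is where the 4 threads (2 jobs x 2 files) started below come from. The following job file is consistent with those lines; the section layout, the thread/time_based globals, and the Nvme0n1/Nvme1n1 bdev names are assumptions, since the generated file itself is never echoed into the log.

# dif.fio: a sketch matching the logged job descriptions, not necessarily
# gen_fio_conf's exact output.
[global]
ioengine=spdk_bdev
# the SPDK fio plugin requires thread mode
thread=1
time_based=1
runtime=5
rw=randread
# read,write,trim block sizes; fio reports these as (R)/(W)/(T) above
bs=8k,16k,128k
iodepth=8
numjobs=2

[filename0]
# bdev created by the Nvme0 attach entry in the JSON config
filename=Nvme0n1

[filename1]
filename=Nvme1n1

It would be launched the way the harness does above, with the plugin preloaded:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio dif.fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_fio_conf.json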
00:29:44.958 fio-3.35 00:29:44.958 Starting 4 threads 00:29:44.958 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.231 00:29:50.231 filename0: (groupid=0, jobs=1): err= 0: pid=3436767: Mon Jul 15 08:05:34 2024 00:29:50.231 read: IOPS=2561, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:29:50.232 slat (nsec): min=6190, max=38098, avg=8888.64, stdev=2921.60 00:29:50.232 clat (usec): min=1400, max=5385, avg=3097.51, stdev=602.75 00:29:50.232 lat (usec): min=1413, max=5391, avg=3106.40, stdev=602.50 00:29:50.232 clat percentiles (usec): 00:29:50.232 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2704], 00:29:50.232 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:29:50.232 | 70.00th=[ 3163], 80.00th=[ 3359], 90.00th=[ 4080], 95.00th=[ 4555], 00:29:50.232 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5276], 00:29:50.232 | 99.99th=[ 5407] 00:29:50.232 bw ( KiB/s): min=19360, max=21984, per=24.31%, avg=20435.56, stdev=817.95, samples=9 00:29:50.232 iops : min= 2420, max= 2748, avg=2554.44, stdev=102.24, samples=9 00:29:50.232 lat (msec) : 2=0.71%, 4=88.53%, 10=10.76% 00:29:50.232 cpu : usr=95.94%, sys=3.74%, ctx=9, majf=0, minf=92 00:29:50.232 IO depths : 1=0.1%, 2=2.0%, 4=70.3%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 issued rwts: total=12811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:50.232 filename0: (groupid=0, jobs=1): err= 0: pid=3436768: Mon Jul 15 08:05:34 2024 00:29:50.232 read: IOPS=2467, BW=19.3MiB/s (20.2MB/s)(96.4MiB/5001msec) 00:29:50.232 slat (nsec): min=6188, max=34267, avg=8723.96, stdev=2956.77 00:29:50.232 clat (usec): min=1345, max=5553, avg=3216.84, stdev=578.23 00:29:50.232 lat (usec): min=1351, max=5560, avg=3225.57, stdev=577.69 00:29:50.232 clat percentiles (usec): 00:29:50.232 | 1.00th=[ 2442], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2802], 00:29:50.232 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3130], 00:29:50.232 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 4228], 95.00th=[ 4621], 00:29:50.232 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5407], 00:29:50.232 | 99.99th=[ 5538] 00:29:50.232 bw ( KiB/s): min=18576, max=20848, per=23.39%, avg=19662.22, stdev=733.32, samples=9 00:29:50.232 iops : min= 2322, max= 2606, avg=2457.78, stdev=91.66, samples=9 00:29:50.232 lat (msec) : 2=0.20%, 4=87.24%, 10=12.56% 00:29:50.232 cpu : usr=96.10%, sys=3.56%, ctx=14, majf=0, minf=99 00:29:50.232 IO depths : 1=0.1%, 2=1.0%, 4=72.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 issued rwts: total=12339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:50.232 filename1: (groupid=0, jobs=1): err= 0: pid=3436769: Mon Jul 15 08:05:34 2024 00:29:50.232 read: IOPS=2546, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5001msec) 00:29:50.232 slat (nsec): min=6196, max=34480, avg=8853.63, stdev=2997.89 00:29:50.232 clat (usec): min=1268, max=5740, avg=3115.75, stdev=559.96 00:29:50.232 lat (usec): min=1275, max=5747, avg=3124.60, stdev=559.46 00:29:50.232 clat percentiles (usec): 00:29:50.232 | 1.00th=[ 2057], 5.00th=[ 2442], 
10.00th=[ 2573], 20.00th=[ 2737], 00:29:50.232 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:29:50.232 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3949], 95.00th=[ 4490], 00:29:50.232 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5342], 00:29:50.232 | 99.99th=[ 5735] 00:29:50.232 bw ( KiB/s): min=19488, max=20992, per=24.21%, avg=20350.22, stdev=460.57, samples=9 00:29:50.232 iops : min= 2436, max= 2624, avg=2543.78, stdev=57.57, samples=9 00:29:50.232 lat (msec) : 2=0.68%, 4=89.57%, 10=9.76% 00:29:50.232 cpu : usr=96.86%, sys=2.82%, ctx=10, majf=0, minf=55 00:29:50.232 IO depths : 1=0.1%, 2=2.0%, 4=70.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 issued rwts: total=12736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:50.232 filename1: (groupid=0, jobs=1): err= 0: pid=3436770: Mon Jul 15 08:05:34 2024 00:29:50.232 read: IOPS=2934, BW=22.9MiB/s (24.0MB/s)(115MiB/5002msec) 00:29:50.232 slat (usec): min=6, max=1350, avg= 9.03, stdev=11.46 00:29:50.232 clat (usec): min=787, max=5096, avg=2699.54, stdev=495.33 00:29:50.232 lat (usec): min=806, max=5103, avg=2708.57, stdev=495.29 00:29:50.232 clat percentiles (usec): 00:29:50.232 | 1.00th=[ 1778], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2311], 00:29:50.232 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2769], 00:29:50.232 | 70.00th=[ 2868], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3687], 00:29:50.232 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 4752], 99.95th=[ 4883], 00:29:50.232 | 99.99th=[ 5080] 00:29:50.232 bw ( KiB/s): min=21840, max=26416, per=28.08%, avg=23607.11, stdev=1394.76, samples=9 00:29:50.232 iops : min= 2730, max= 3302, avg=2950.89, stdev=174.34, samples=9 00:29:50.232 lat (usec) : 1000=0.08% 00:29:50.232 lat (msec) : 2=3.65%, 4=93.33%, 10=2.94% 00:29:50.232 cpu : usr=95.92%, sys=3.74%, ctx=8, majf=0, minf=64 00:29:50.232 IO depths : 1=0.2%, 2=7.0%, 4=63.9%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.232 issued rwts: total=14676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:50.232 00:29:50.232 Run status group 0 (all jobs): 00:29:50.232 READ: bw=82.1MiB/s (86.1MB/s), 19.3MiB/s-22.9MiB/s (20.2MB/s-24.0MB/s), io=411MiB (431MB), run=5001-5002msec 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 00:29:50.232 real 0m24.226s 00:29:50.232 user 4m53.110s 00:29:50.232 sys 0m4.388s 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 ************************************ 00:29:50.232 END TEST fio_dif_rand_params 00:29:50.232 ************************************ 00:29:50.232 08:05:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:50.232 08:05:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:50.232 08:05:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:50.232 08:05:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 ************************************ 00:29:50.232 START TEST fio_dif_digest 00:29:50.232 ************************************ 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 bdev_null0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.232 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.233 [2024-07-15 08:05:34.850096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.233 { 00:29:50.233 "params": { 00:29:50.233 "name": "Nvme$subsystem", 
00:29:50.233 "trtype": "$TEST_TRANSPORT", 00:29:50.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.233 "adrfam": "ipv4", 00:29:50.233 "trsvcid": "$NVMF_PORT", 00:29:50.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.233 "hdgst": ${hdgst:-false}, 00:29:50.233 "ddgst": ${ddgst:-false} 00:29:50.233 }, 00:29:50.233 "method": "bdev_nvme_attach_controller" 00:29:50.233 } 00:29:50.233 EOF 00:29:50.233 )") 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:50.233 "params": { 00:29:50.233 "name": "Nvme0", 00:29:50.233 "trtype": "tcp", 00:29:50.233 "traddr": "10.0.0.2", 00:29:50.233 "adrfam": "ipv4", 00:29:50.233 "trsvcid": "4420", 00:29:50.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.233 "hdgst": true, 00:29:50.233 "ddgst": true 00:29:50.233 }, 00:29:50.233 "method": "bdev_nvme_attach_controller" 00:29:50.233 }' 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:50.233 08:05:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.493 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:50.493 ... 
00:29:50.493 fio-3.35 00:29:50.493 Starting 3 threads 00:29:50.493 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.728 00:30:02.728 filename0: (groupid=0, jobs=1): err= 0: pid=3438019: Mon Jul 15 08:05:45 2024 00:30:02.728 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10047msec) 00:30:02.728 slat (usec): min=6, max=188, avg=15.01, stdev= 7.52 00:30:02.728 clat (usec): min=5530, max=50243, avg=10174.61, stdev=1279.12 00:30:02.728 lat (usec): min=5539, max=50254, avg=10189.62, stdev=1279.17 00:30:02.728 clat percentiles (usec): 00:30:02.729 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:30:02.729 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:30:02.729 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:30:02.729 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12649], 99.95th=[49021], 00:30:02.729 | 99.99th=[50070] 00:30:02.729 bw ( KiB/s): min=36608, max=39168, per=34.95%, avg=37772.80, stdev=785.65, samples=20 00:30:02.729 iops : min= 286, max= 306, avg=295.10, stdev= 6.14, samples=20 00:30:02.729 lat (msec) : 10=39.93%, 20=60.01%, 50=0.03%, 100=0.03% 00:30:02.729 cpu : usr=96.31%, sys=3.36%, ctx=25, majf=0, minf=124 00:30:02.729 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 issued rwts: total=2953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.729 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:02.729 filename0: (groupid=0, jobs=1): err= 0: pid=3438021: Mon Jul 15 08:05:45 2024 00:30:02.729 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(348MiB/10004msec) 00:30:02.729 slat (nsec): min=6524, max=43492, avg=14577.71, stdev=6546.10 00:30:02.729 clat (usec): min=5747, max=13861, avg=10759.08, stdev=808.47 00:30:02.729 lat (usec): min=5755, max=13874, avg=10773.66, stdev=808.81 00:30:02.729 clat percentiles (usec): 00:30:02.729 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:30:02.729 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:30:02.729 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:30:02.729 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13435], 99.95th=[13566], 00:30:02.729 | 99.99th=[13829] 00:30:02.729 bw ( KiB/s): min=34304, max=38400, per=32.99%, avg=35651.37, stdev=1065.27, samples=19 00:30:02.729 iops : min= 268, max= 300, avg=278.53, stdev= 8.32, samples=19 00:30:02.729 lat (msec) : 10=17.02%, 20=82.98% 00:30:02.729 cpu : usr=96.58%, sys=3.11%, ctx=21, majf=0, minf=131 00:30:02.729 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 issued rwts: total=2785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.729 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:02.729 filename0: (groupid=0, jobs=1): err= 0: pid=3438022: Mon Jul 15 08:05:45 2024 00:30:02.729 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(343MiB/10047msec) 00:30:02.729 slat (usec): min=6, max=223, avg=15.99, stdev= 7.71 00:30:02.729 clat (usec): min=8219, max=47820, avg=10945.87, stdev=1256.23 00:30:02.729 lat (usec): min=8227, max=47833, avg=10961.86, stdev=1256.42 00:30:02.729 clat percentiles (usec): 00:30:02.729 | 1.00th=[ 9110], 5.00th=[ 9634], 
10.00th=[ 9896], 20.00th=[10290], 00:30:02.729 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:30:02.729 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:30:02.729 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14484], 99.95th=[46400], 00:30:02.729 | 99.99th=[47973] 00:30:02.729 bw ( KiB/s): min=34048, max=36352, per=32.49%, avg=35114.00, stdev=753.93, samples=20 00:30:02.729 iops : min= 266, max= 284, avg=274.30, stdev= 5.85, samples=20 00:30:02.729 lat (msec) : 10=11.04%, 20=88.89%, 50=0.07% 00:30:02.729 cpu : usr=95.86%, sys=3.80%, ctx=32, majf=0, minf=115 00:30:02.729 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:02.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.729 issued rwts: total=2745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.729 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:02.729 00:30:02.729 Run status group 0 (all jobs): 00:30:02.729 READ: bw=106MiB/s (111MB/s), 34.2MiB/s-36.7MiB/s (35.8MB/s-38.5MB/s), io=1060MiB (1112MB), run=10004-10047msec 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.729 00:30:02.729 real 0m11.312s 00:30:02.729 user 0m35.756s 00:30:02.729 sys 0m1.366s 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:02.729 08:05:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:02.729 ************************************ 00:30:02.729 END TEST fio_dif_digest 00:30:02.729 ************************************ 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:02.729 08:05:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:02.729 08:05:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.729 rmmod 
nvme_tcp 00:30:02.729 rmmod nvme_fabrics 00:30:02.729 rmmod nvme_keyring 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3429302 ']' 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3429302 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3429302 ']' 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3429302 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3429302 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3429302' 00:30:02.729 killing process with pid 3429302 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3429302 00:30:02.729 08:05:46 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3429302 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:02.729 08:05:46 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:04.637 Waiting for block devices as requested 00:30:04.638 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:04.638 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:04.638 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:04.897 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:04.897 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:04.897 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:04.897 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:05.156 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:05.156 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:05.156 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:05.437 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:05.437 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:05.437 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:05.437 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:05.696 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:05.696 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:05.696 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:05.954 08:05:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:05.954 08:05:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:05.954 08:05:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.954 08:05:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:05.954 08:05:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.954 08:05:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:05.954 08:05:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.861 08:05:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:07.861 00:30:07.861 real 1m14.409s 00:30:07.861 user 7m12.368s 00:30:07.861 sys 0m18.750s 00:30:07.861 08:05:52 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:07.861 08:05:52 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:07.861 ************************************ 00:30:07.861 END TEST nvmf_dif 00:30:07.861 ************************************ 00:30:07.861 08:05:52 -- common/autotest_common.sh@1142 -- # return 0 00:30:07.861 08:05:52 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:07.861 08:05:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:07.861 08:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.861 08:05:52 -- common/autotest_common.sh@10 -- # set +x 00:30:07.861 ************************************ 00:30:07.861 START TEST nvmf_abort_qd_sizes 00:30:07.861 ************************************ 00:30:07.861 08:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:08.124 * Looking for test storage... 00:30:08.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.124 08:05:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:08.124 08:05:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:13.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:13.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:13.449 Found net devices under 0000:86:00.0: cvl_0_0 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:13.449 Found net devices under 0000:86:00.1: cvl_0_1 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
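[Editor's note] The device scan above pairs each E810 PCI function with its kernel netdev by globbing sysfs rather than parsing lspci output. A minimal sketch of that lookup, using the same two BDFs this host reported; the nullglob setting is an addition here so an unmatched glob yields an empty array instead of the literal pattern.

# Sketch of the sysfs PCI-to-netdev lookup traced above (nvmf/common.sh).
shopt -s nullglob   # a function with no bound netdev yields an empty array
net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Any netdev bound to a PCI function appears as a directory under
    # /sys/bus/pci/devices/<bdf>/net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if ((${#pci_net_devs[@]} == 0)); then
        echo "No net devices under $pci" >&2
        continue
    fi
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep iface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done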
00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.449 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.709 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:30:13.709 00:30:13.709 --- 10.0.0.2 ping statistics --- 00:30:13.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.710 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:30:13.710 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:30:13.710 00:30:13.710 --- 10.0.0.1 ping statistics --- 00:30:13.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.710 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:13.710 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.710 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:13.710 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:13.710 08:05:58 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:17.002 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:17.002 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:17.570 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3445950 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3445950 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3445950 ']' 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:17.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.570 08:06:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:17.570 [2024-07-15 08:06:02.275157] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:30:17.570 [2024-07-15 08:06:02.275204] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.570 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.829 [2024-07-15 08:06:02.349567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.829 [2024-07-15 08:06:02.431482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.829 [2024-07-15 08:06:02.431520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.829 [2024-07-15 08:06:02.431528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.829 [2024-07-15 08:06:02.431534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.829 [2024-07-15 08:06:02.431539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.829 [2024-07-15 08:06:02.431593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.829 [2024-07-15 08:06:02.431700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.829 [2024-07-15 08:06:02.431802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.829 [2024-07-15 08:06:02.431802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:18.398 08:06:03 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:18.398 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:18.399 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:18.399 08:06:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:18.399 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.399 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.399 08:06:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:18.656 ************************************ 00:30:18.656 START TEST spdk_target_abort 00:30:18.656 ************************************ 00:30:18.656 08:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:18.656 08:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:18.656 08:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:18.656 08:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.656 08:06:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.940 spdk_targetn1 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.940 [2024-07-15 08:06:06.012435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.940 [2024-07-15 08:06:06.045340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:21.940 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:21.941 08:06:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:21.941 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:25.228 Initializing NVMe Controllers 00:30:25.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:25.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:25.228 Initialization complete. Launching workers. 00:30:25.228 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16102, failed: 0 00:30:25.228 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 14824 00:30:25.228 success 779, unsuccess 499, failed 0 00:30:25.228 08:06:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:25.228 08:06:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:25.228 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.516 Initializing NVMe Controllers 00:30:28.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:28.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:28.516 Initialization complete. Launching workers. 00:30:28.516 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8558, failed: 0 00:30:28.516 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7324 00:30:28.516 success 352, unsuccess 882, failed 0 00:30:28.516 08:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:28.516 08:06:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:28.516 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.082 Initializing NVMe Controllers 00:30:31.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:31.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:31.082 Initialization complete. Launching workers. 
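A minimal sketch of the loop the xtrace above expands: abort_qd_sizes.sh runs the SPDK abort example once per queue depth, always with 4 KiB I/O at a 50% read/write mix. The binary path, flags, and subsystem NQN are copied from the trace; the loop itself is a reconstruction, not the verbatim script.

    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # -q: queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/O
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each run reports how many abort commands were submitted and how many succeeded; deeper queues leave more I/O in flight for the abort path to exercise, which is visible in the per-run counts above.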
00:30:31.082 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38490, failed: 0 00:30:31.082 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2704, failed to submit 35786 00:30:31.082 success 601, unsuccess 2103, failed 0 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.082 08:06:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3445950 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3445950 ']' 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3445950 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3445950 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3445950' 00:30:32.459 killing process with pid 3445950 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3445950 00:30:32.459 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3445950 00:30:32.717 00:30:32.717 real 0m14.071s 00:30:32.717 user 0m56.158s 00:30:32.717 sys 0m2.211s 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:32.717 ************************************ 00:30:32.717 END TEST spdk_target_abort 00:30:32.717 ************************************ 00:30:32.717 08:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:32.717 08:06:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:32.717 08:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.717 08:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.717 08:06:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:32.717 
************************************ 00:30:32.717 START TEST kernel_target_abort 00:30:32.717 ************************************ 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:32.717 08:06:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:35.255 Waiting for block devices as requested 00:30:35.255 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:35.514 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:35.514 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:35.774 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:35.774 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:35.774 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:35.774 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:36.034 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:36.034 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:36.034 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:36.293 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:36.293 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:36.293 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:36.293 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:36.551 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:36.551 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:36.551 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:36.811 No valid GPT data, bailing 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:36.811 08:06:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:36.811 00:30:36.811 Discovery Log Number of Records 2, Generation counter 2 00:30:36.811 =====Discovery Log Entry 0====== 00:30:36.811 trtype: tcp 00:30:36.811 adrfam: ipv4 00:30:36.811 subtype: current discovery subsystem 00:30:36.811 treq: not specified, sq flow control disable supported 00:30:36.811 portid: 1 00:30:36.811 trsvcid: 4420 00:30:36.811 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:36.811 traddr: 10.0.0.1 00:30:36.811 eflags: none 00:30:36.811 sectype: none 00:30:36.811 =====Discovery Log Entry 1====== 00:30:36.811 trtype: tcp 00:30:36.811 adrfam: ipv4 00:30:36.811 subtype: nvme subsystem 00:30:36.811 treq: not specified, sq flow control disable supported 00:30:36.811 portid: 1 00:30:36.811 trsvcid: 4420 00:30:36.811 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:36.811 traddr: 10.0.0.1 00:30:36.811 eflags: none 00:30:36.811 sectype: none 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.811 08:06:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:36.811 08:06:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.811 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.099 Initializing NVMe Controllers 00:30:40.099 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:40.099 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:40.099 Initialization complete. Launching workers. 00:30:40.099 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89292, failed: 0 00:30:40.099 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 89292, failed to submit 0 00:30:40.099 success 0, unsuccess 89292, failed 0 00:30:40.099 08:06:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:40.099 08:06:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.099 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.384 Initializing NVMe Controllers 00:30:43.384 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.384 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.384 Initialization complete. Launching workers. 
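The kernel target this run attaches to was stitched together through nvmet configfs in the trace above. The xtrace hides redirection targets, so the attribute file names below are the standard nvmet ones and should be read as an assumption rather than a verbatim quote of configure_kernel_target in nvmf/common.sh (the trace also writes an "SPDK-…" model/serial string whose target file is not visible; it is omitted here).

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo 1            > "$subsys/attr_allow_any_host"      # assumed target of one traced 'echo 1'
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path" # block device found by the GPT scan above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                    # expose the subsystem on the port

Once the symlink lands, `nvme discover -a 10.0.0.1 -t tcp -s 4420` returns the two discovery log entries shown above: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn.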
00:30:43.384 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142899, failed: 0 00:30:43.384 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35454, failed to submit 107445 00:30:43.384 success 0, unsuccess 35454, failed 0 00:30:43.384 08:06:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:43.384 08:06:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.384 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.671 Initializing NVMe Controllers 00:30:46.671 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:46.671 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:46.671 Initialization complete. Launching workers. 00:30:46.671 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133859, failed: 0 00:30:46.671 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33502, failed to submit 100357 00:30:46.671 success 0, unsuccess 33502, failed 0 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:46.671 08:06:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:49.207 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:30:49.207 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:49.207 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:49.774 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:50.031 00:30:50.031 real 0m17.311s 00:30:50.031 user 0m8.767s 00:30:50.031 sys 0m4.995s 00:30:50.031 08:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:50.031 08:06:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:50.031 ************************************ 00:30:50.031 END TEST kernel_target_abort 00:30:50.032 ************************************ 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:50.032 rmmod nvme_tcp 00:30:50.032 rmmod nvme_fabrics 00:30:50.032 rmmod nvme_keyring 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3445950 ']' 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3445950 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3445950 ']' 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3445950 00:30:50.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3445950) - No such process 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3445950 is not found' 00:30:50.032 Process with pid 3445950 is not found 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:50.032 08:06:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:52.567 Waiting for block devices as requested 00:30:52.825 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:52.825 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:53.085 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:53.085 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:53.085 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:53.085 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:53.345 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:53.345 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:53.345 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:53.604 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:53.604 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:53.604 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:53.863 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:53.863 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:53.863 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:53.863 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:54.122 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:54.122 08:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.660 08:06:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:56.660 00:30:56.660 real 0m48.222s 00:30:56.660 user 1m9.186s 00:30:56.660 sys 0m15.705s 00:30:56.660 08:06:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:56.660 08:06:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:56.660 ************************************ 00:30:56.660 END TEST nvmf_abort_qd_sizes 00:30:56.660 ************************************ 00:30:56.660 08:06:40 -- common/autotest_common.sh@1142 -- # return 0 00:30:56.660 08:06:40 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:56.660 08:06:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:56.660 08:06:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.660 08:06:40 -- common/autotest_common.sh@10 -- # set +x 00:30:56.660 ************************************ 00:30:56.660 START TEST keyring_file 00:30:56.660 ************************************ 00:30:56.660 08:06:40 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:56.660 * Looking for test storage... 
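For reference, the clean_kernel_target teardown traced a few entries back (just before the driver rebinds) undoes that configfs setup in reverse order; reconstructed from the trace, with $subsys and $port as in the setup sketch above and the one hidden redirect target flagged:

    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0'
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet

The ordering matters: the port-to-subsystem symlink must be removed and the namespace disabled before the configfs directories can be rmdir'd, and only then will modprobe -r succeed.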
00:30:56.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:56.660 08:06:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:56.660 08:06:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.660 08:06:40 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.660 08:06:40 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.660 08:06:40 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.660 08:06:40 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.660 08:06:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.660 08:06:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.660 08:06:40 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.660 08:06:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:56.660 08:06:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eI0BcFqgtw 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:56.660 08:06:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eI0BcFqgtw 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eI0BcFqgtw 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eI0BcFqgtw 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UwuCf0Jrxn 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:56.660 08:06:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UwuCf0Jrxn 00:30:56.660 08:06:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UwuCf0Jrxn 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UwuCf0Jrxn 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=3455115 00:30:56.660 08:06:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:56.661 08:06:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3455115 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3455115 ']' 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:56.661 08:06:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:56.661 [2024-07-15 08:06:41.166594] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
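The two /tmp key files created above hold TLS PSKs in the NVMe interchange format. Below is a sketch of what format_interchange_psk emits, assuming the TP 8006 layout (a "NVMeTLSkey-1" prefix, a two-hex-digit hash identifier, then base64 of the key bytes with a little-endian CRC32 appended); the real helper in nvmf/common.sh may differ in detail:

    format_interchange_psk() {   # sketch, not the verbatim helper
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 lets consumers detect corrupted keys
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
    }

Used as in the trace: `format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"` followed by `chmod 0600 "$path"`; the 0600 mode is not optional, as the permission-check failure later in this test demonstrates.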
00:30:56.661 [2024-07-15 08:06:41.166645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455115 ] 00:30:56.661 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.661 [2024-07-15 08:06:41.214587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.661 [2024-07-15 08:06:41.287354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.229 08:06:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:57.229 08:06:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:57.229 08:06:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:57.229 08:06:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.229 08:06:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:57.229 [2024-07-15 08:06:41.976017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.490 null0 00:30:57.490 [2024-07-15 08:06:42.008067] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:57.490 [2024-07-15 08:06:42.008294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:57.490 [2024-07-15 08:06:42.016082] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.490 08:06:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.490 08:06:42 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:57.491 [2024-07-15 08:06:42.028112] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:57.491 request: 00:30:57.491 { 00:30:57.491 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.491 "secure_channel": false, 00:30:57.491 "listen_address": { 00:30:57.491 "trtype": "tcp", 00:30:57.491 "traddr": "127.0.0.1", 00:30:57.491 "trsvcid": "4420" 00:30:57.491 }, 00:30:57.491 "method": "nvmf_subsystem_add_listener", 00:30:57.491 "req_id": 1 00:30:57.491 } 00:30:57.491 Got JSON-RPC error response 00:30:57.491 response: 00:30:57.491 { 00:30:57.491 "code": -32602, 00:30:57.491 "message": "Invalid parameters" 00:30:57.491 } 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@651 -- # es=1 
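The failed RPC above is deliberate: after the target is already listening on 127.0.0.1:4420, file.sh asserts that a second nvmf_subsystem_add_listener on the same address is rejected with the -32602 "Invalid parameters" response shown. A sketch of that negative check against the running spdk_tgt (rpc.py path shortened; argument order copied from the trace):

    # A listener already exists on 127.0.0.1:4420, so this call must fail:
    if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
           nqn.2016-06.io.spdk:cnode0; then
        echo "BUG: duplicate listener was accepted" >&2
        exit 1
    fi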
00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:57.491 08:06:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=3455218 00:30:57.491 08:06:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3455218 /var/tmp/bperf.sock 00:30:57.491 08:06:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3455218 ']' 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:57.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:57.491 08:06:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:57.491 [2024-07-15 08:06:42.081311] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 00:30:57.491 [2024-07-15 08:06:42.081354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455218 ] 00:30:57.491 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.491 [2024-07-15 08:06:42.150317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.491 [2024-07-15 08:06:42.230067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.461 08:06:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:58.461 08:06:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:58.461 08:06:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:30:58.461 08:06:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:30:58.461 08:06:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UwuCf0Jrxn 00:30:58.461 08:06:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UwuCf0Jrxn 00:30:58.718 08:06:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:58.718 08:06:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.718 08:06:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.eI0BcFqgtw == \/\t\m\p\/\t\m\p\.\e\I\0\B\c\F\q\g\t\w ]] 00:30:58.718 08:06:43 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:58.718 08:06:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:58.718 08:06:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.976 08:06:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.UwuCf0Jrxn == \/\t\m\p\/\t\m\p\.\U\w\u\C\f\0\J\r\x\n ]] 00:30:58.976 08:06:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:58.976 08:06:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:58.976 08:06:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.976 08:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.976 08:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.976 08:06:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.233 08:06:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:59.233 08:06:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:59.233 08:06:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:59.233 08:06:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.233 08:06:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:59.233 08:06:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.233 08:06:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.491 08:06:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:59.491 08:06:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.491 08:06:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.491 [2024-07-15 08:06:44.165024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:59.491 nvme0n1 00:30:59.748 08:06:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.748 08:06:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:59.748 08:06:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.748 08:06:44 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.748 08:06:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.006 08:06:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:00.006 08:06:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:00.006 Running I/O for 1 seconds... 00:31:01.382 00:31:01.382 Latency(us) 00:31:01.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.382 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:01.382 nvme0n1 : 1.00 17082.96 66.73 0.00 0.00 7474.05 3647.22 11739.49 00:31:01.382 =================================================================================================================== 00:31:01.382 Total : 17082.96 66.73 0.00 0.00 7474.05 3647.22 11739.49 00:31:01.382 0 00:31:01.382 08:06:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:01.382 08:06:45 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:01.382 08:06:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.382 08:06:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:01.382 08:06:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:01.382 08:06:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:01.382 08:06:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.382 08:06:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.382 08:06:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:01.382 08:06:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.641 08:06:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:01.641 08:06:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:01.641 08:06:46 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:01.641 08:06:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.641 08:06:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:01.901 [2024-07-15 08:06:46.441967] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:01.901 [2024-07-15 08:06:46.442658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0a820 (107): Transport endpoint is not connected 00:31:01.901 [2024-07-15 08:06:46.443652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0a820 (9): Bad file descriptor 00:31:01.901 [2024-07-15 08:06:46.444653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:01.901 [2024-07-15 08:06:46.444665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:01.901 [2024-07-15 08:06:46.444672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:01.901 request: 00:31:01.901 { 00:31:01.901 "name": "nvme0", 00:31:01.901 "trtype": "tcp", 00:31:01.901 "traddr": "127.0.0.1", 00:31:01.901 "adrfam": "ipv4", 00:31:01.901 "trsvcid": "4420", 00:31:01.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.901 "prchk_reftag": false, 00:31:01.901 "prchk_guard": false, 00:31:01.901 "hdgst": false, 00:31:01.901 "ddgst": false, 00:31:01.901 "psk": "key1", 00:31:01.901 "method": "bdev_nvme_attach_controller", 00:31:01.901 "req_id": 1 00:31:01.901 } 00:31:01.901 Got JSON-RPC error response 00:31:01.901 response: 00:31:01.901 { 00:31:01.901 "code": -5, 00:31:01.901 "message": "Input/output error" 00:31:01.901 } 00:31:01.901 08:06:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:01.901 08:06:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:01.901 08:06:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:01.901 08:06:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:01.901 08:06:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.901 08:06:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:01.901 08:06:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.901 08:06:46 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:01.901 08:06:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.160 08:06:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:02.160 08:06:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:02.160 08:06:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:02.419 08:06:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:02.419 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:02.678 08:06:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:02.678 08:06:47 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:02.678 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.678 08:06:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:02.678 08:06:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.eI0BcFqgtw 00:31:02.678 08:06:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.678 08:06:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:02.678 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:02.937 [2024-07-15 08:06:47.534822] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eI0BcFqgtw': 0100660 00:31:02.937 [2024-07-15 08:06:47.534845] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:02.937 request: 00:31:02.937 { 00:31:02.937 "name": "key0", 00:31:02.937 "path": "/tmp/tmp.eI0BcFqgtw", 00:31:02.937 "method": "keyring_file_add_key", 00:31:02.937 "req_id": 1 00:31:02.937 } 00:31:02.937 Got JSON-RPC error response 00:31:02.937 response: 00:31:02.937 { 00:31:02.937 "code": -1, 00:31:02.937 "message": "Operation not permitted" 00:31:02.937 } 00:31:02.937 08:06:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:02.937 08:06:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.937 08:06:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.937 08:06:47 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.937 08:06:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.eI0BcFqgtw 00:31:02.937 08:06:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:02.937 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eI0BcFqgtw 00:31:03.195 08:06:47 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.eI0BcFqgtw 00:31:03.195 08:06:47 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:03.195 08:06:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.195 08:06:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.195 08:06:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.195 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.195 08:06:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.453 08:06:47 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:03.453 08:06:47 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:03.453 08:06:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:03.454 08:06:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.454 08:06:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.454 [2024-07-15 08:06:48.116365] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eI0BcFqgtw': No such file or directory 00:31:03.454 [2024-07-15 08:06:48.116386] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:03.454 [2024-07-15 08:06:48.116407] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:03.454 [2024-07-15 08:06:48.116414] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:03.454 [2024-07-15 08:06:48.116420] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:03.454 request: 00:31:03.454 { 00:31:03.454 "name": "nvme0", 00:31:03.454 "trtype": "tcp", 00:31:03.454 "traddr": "127.0.0.1", 00:31:03.454 "adrfam": "ipv4", 00:31:03.454 
"trsvcid": "4420", 00:31:03.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.454 "prchk_reftag": false, 00:31:03.454 "prchk_guard": false, 00:31:03.454 "hdgst": false, 00:31:03.454 "ddgst": false, 00:31:03.454 "psk": "key0", 00:31:03.454 "method": "bdev_nvme_attach_controller", 00:31:03.454 "req_id": 1 00:31:03.454 } 00:31:03.454 Got JSON-RPC error response 00:31:03.454 response: 00:31:03.454 { 00:31:03.454 "code": -19, 00:31:03.454 "message": "No such device" 00:31:03.454 } 00:31:03.454 08:06:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:03.454 08:06:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:03.454 08:06:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:03.454 08:06:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:03.454 08:06:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:03.454 08:06:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:03.712 08:06:48 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.slcj8uAhMA 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:03.712 08:06:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.slcj8uAhMA 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.slcj8uAhMA 00:31:03.712 08:06:48 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.slcj8uAhMA 00:31:03.712 08:06:48 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.slcj8uAhMA 00:31:03.712 08:06:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slcj8uAhMA 00:31:03.970 08:06:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.970 08:06:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.228 nvme0n1 00:31:04.228 
08:06:48 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.228 08:06:48 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:04.228 08:06:48 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:04.228 08:06:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:04.488 08:06:49 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:04.488 08:06:49 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:04.488 08:06:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.488 08:06:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.488 08:06:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.746 08:06:49 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:04.746 08:06:49 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.746 08:06:49 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:04.746 08:06:49 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:04.746 08:06:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:05.003 08:06:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:05.003 08:06:49 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:05.003 08:06:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.261 08:06:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:05.261 08:06:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.slcj8uAhMA 00:31:05.261 08:06:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.slcj8uAhMA 00:31:05.518 08:06:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UwuCf0Jrxn 00:31:05.518 08:06:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UwuCf0Jrxn 00:31:05.518 08:06:50 
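
Note: the recurring get_refcnt checks in this trace are a small RPC-plus-jq probe against the bperf socket; a sketch of the helper as used here (the function name mirrors the harness helper, the body approximates its get_key/jq pipeline):

get_refcnt() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
        jq -r ".[] | select(.name == \"$1\").refcnt"
}
(( $(get_refcnt key0) == 1 ))   # exactly one reference while nvme0 holds key0
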
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.518 08:06:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.776 nvme0n1 00:31:05.776 08:06:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:05.776 08:06:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:06.035 08:06:50 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:06.035 "subsystems": [ 00:31:06.035 { 00:31:06.035 "subsystem": "keyring", 00:31:06.035 "config": [ 00:31:06.035 { 00:31:06.035 "method": "keyring_file_add_key", 00:31:06.035 "params": { 00:31:06.035 "name": "key0", 00:31:06.035 "path": "/tmp/tmp.slcj8uAhMA" 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "keyring_file_add_key", 00:31:06.035 "params": { 00:31:06.035 "name": "key1", 00:31:06.035 "path": "/tmp/tmp.UwuCf0Jrxn" 00:31:06.035 } 00:31:06.035 } 00:31:06.035 ] 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "subsystem": "iobuf", 00:31:06.035 "config": [ 00:31:06.035 { 00:31:06.035 "method": "iobuf_set_options", 00:31:06.035 "params": { 00:31:06.035 "small_pool_count": 8192, 00:31:06.035 "large_pool_count": 1024, 00:31:06.035 "small_bufsize": 8192, 00:31:06.035 "large_bufsize": 135168 00:31:06.035 } 00:31:06.035 } 00:31:06.035 ] 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "subsystem": "sock", 00:31:06.035 "config": [ 00:31:06.035 { 00:31:06.035 "method": "sock_set_default_impl", 00:31:06.035 "params": { 00:31:06.035 "impl_name": "posix" 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "sock_impl_set_options", 00:31:06.035 "params": { 00:31:06.035 "impl_name": "ssl", 00:31:06.035 "recv_buf_size": 4096, 00:31:06.035 "send_buf_size": 4096, 00:31:06.035 "enable_recv_pipe": true, 00:31:06.035 "enable_quickack": false, 00:31:06.035 "enable_placement_id": 0, 00:31:06.035 "enable_zerocopy_send_server": true, 00:31:06.035 "enable_zerocopy_send_client": false, 00:31:06.035 "zerocopy_threshold": 0, 00:31:06.035 "tls_version": 0, 00:31:06.035 "enable_ktls": false 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "sock_impl_set_options", 00:31:06.035 "params": { 00:31:06.035 "impl_name": "posix", 00:31:06.035 "recv_buf_size": 2097152, 00:31:06.035 "send_buf_size": 2097152, 00:31:06.035 "enable_recv_pipe": true, 00:31:06.035 "enable_quickack": false, 00:31:06.035 "enable_placement_id": 0, 00:31:06.035 "enable_zerocopy_send_server": true, 00:31:06.035 "enable_zerocopy_send_client": false, 00:31:06.035 "zerocopy_threshold": 0, 00:31:06.035 "tls_version": 0, 00:31:06.035 "enable_ktls": false 00:31:06.035 } 00:31:06.035 } 00:31:06.035 ] 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "subsystem": "vmd", 00:31:06.035 "config": [] 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "subsystem": "accel", 00:31:06.035 "config": [ 00:31:06.035 { 00:31:06.035 "method": "accel_set_options", 00:31:06.035 "params": { 00:31:06.035 "small_cache_size": 128, 00:31:06.035 "large_cache_size": 16, 00:31:06.035 "task_count": 2048, 00:31:06.035 "sequence_count": 2048, 00:31:06.035 "buf_count": 2048 00:31:06.035 } 00:31:06.035 } 00:31:06.035 ] 00:31:06.035 
}, 00:31:06.035 { 00:31:06.035 "subsystem": "bdev", 00:31:06.035 "config": [ 00:31:06.035 { 00:31:06.035 "method": "bdev_set_options", 00:31:06.035 "params": { 00:31:06.035 "bdev_io_pool_size": 65535, 00:31:06.035 "bdev_io_cache_size": 256, 00:31:06.035 "bdev_auto_examine": true, 00:31:06.035 "iobuf_small_cache_size": 128, 00:31:06.035 "iobuf_large_cache_size": 16 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_raid_set_options", 00:31:06.035 "params": { 00:31:06.035 "process_window_size_kb": 1024 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_iscsi_set_options", 00:31:06.035 "params": { 00:31:06.035 "timeout_sec": 30 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_nvme_set_options", 00:31:06.035 "params": { 00:31:06.035 "action_on_timeout": "none", 00:31:06.035 "timeout_us": 0, 00:31:06.035 "timeout_admin_us": 0, 00:31:06.035 "keep_alive_timeout_ms": 10000, 00:31:06.035 "arbitration_burst": 0, 00:31:06.035 "low_priority_weight": 0, 00:31:06.035 "medium_priority_weight": 0, 00:31:06.035 "high_priority_weight": 0, 00:31:06.035 "nvme_adminq_poll_period_us": 10000, 00:31:06.035 "nvme_ioq_poll_period_us": 0, 00:31:06.035 "io_queue_requests": 512, 00:31:06.035 "delay_cmd_submit": true, 00:31:06.035 "transport_retry_count": 4, 00:31:06.035 "bdev_retry_count": 3, 00:31:06.035 "transport_ack_timeout": 0, 00:31:06.035 "ctrlr_loss_timeout_sec": 0, 00:31:06.035 "reconnect_delay_sec": 0, 00:31:06.035 "fast_io_fail_timeout_sec": 0, 00:31:06.035 "disable_auto_failback": false, 00:31:06.035 "generate_uuids": false, 00:31:06.035 "transport_tos": 0, 00:31:06.035 "nvme_error_stat": false, 00:31:06.035 "rdma_srq_size": 0, 00:31:06.035 "io_path_stat": false, 00:31:06.035 "allow_accel_sequence": false, 00:31:06.035 "rdma_max_cq_size": 0, 00:31:06.035 "rdma_cm_event_timeout_ms": 0, 00:31:06.035 "dhchap_digests": [ 00:31:06.035 "sha256", 00:31:06.035 "sha384", 00:31:06.035 "sha512" 00:31:06.035 ], 00:31:06.035 "dhchap_dhgroups": [ 00:31:06.035 "null", 00:31:06.035 "ffdhe2048", 00:31:06.035 "ffdhe3072", 00:31:06.035 "ffdhe4096", 00:31:06.035 "ffdhe6144", 00:31:06.035 "ffdhe8192" 00:31:06.035 ] 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_nvme_attach_controller", 00:31:06.035 "params": { 00:31:06.035 "name": "nvme0", 00:31:06.035 "trtype": "TCP", 00:31:06.035 "adrfam": "IPv4", 00:31:06.035 "traddr": "127.0.0.1", 00:31:06.035 "trsvcid": "4420", 00:31:06.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.035 "prchk_reftag": false, 00:31:06.035 "prchk_guard": false, 00:31:06.035 "ctrlr_loss_timeout_sec": 0, 00:31:06.035 "reconnect_delay_sec": 0, 00:31:06.035 "fast_io_fail_timeout_sec": 0, 00:31:06.035 "psk": "key0", 00:31:06.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.035 "hdgst": false, 00:31:06.035 "ddgst": false 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_nvme_set_hotplug", 00:31:06.035 "params": { 00:31:06.035 "period_us": 100000, 00:31:06.035 "enable": false 00:31:06.035 } 00:31:06.035 }, 00:31:06.035 { 00:31:06.035 "method": "bdev_wait_for_examine" 00:31:06.035 } 00:31:06.036 ] 00:31:06.036 }, 00:31:06.036 { 00:31:06.036 "subsystem": "nbd", 00:31:06.036 "config": [] 00:31:06.036 } 00:31:06.036 ] 00:31:06.036 }' 00:31:06.036 08:06:50 keyring_file -- keyring/file.sh@114 -- # killprocess 3455218 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3455218 ']' 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3455218 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3455218 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3455218' 00:31:06.036 killing process with pid 3455218 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@967 -- # kill 3455218 00:31:06.036 Received shutdown signal, test time was about 1.000000 seconds 00:31:06.036 00:31:06.036 Latency(us) 00:31:06.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.036 =================================================================================================================== 00:31:06.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.036 08:06:50 keyring_file -- common/autotest_common.sh@972 -- # wait 3455218 00:31:06.295 08:06:50 keyring_file -- keyring/file.sh@117 -- # bperfpid=3456842 00:31:06.295 08:06:50 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3456842 /var/tmp/bperf.sock 00:31:06.295 08:06:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3456842 ']' 00:31:06.295 08:06:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.295 08:06:50 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:06.295 08:06:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:06.295 08:06:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
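
Note: the restart above launches bdevperf with -z (stay idle until told to run over RPC) and -c /dev/fd/63, i.e. the JSON produced by save_config is fed back in over process substitution so both PSK files and the nvme0 controller exist before perform_tests fires. A condensed sketch of that launch-and-wait pattern (timeout and error handling omitted):

./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!
# poll the UNIX-domain RPC socket until the app answers
until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done
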
00:31:06.295 08:06:50 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:06.295 "subsystems": [ 00:31:06.295 { 00:31:06.295 "subsystem": "keyring", 00:31:06.295 "config": [ 00:31:06.295 { 00:31:06.295 "method": "keyring_file_add_key", 00:31:06.295 "params": { 00:31:06.295 "name": "key0", 00:31:06.295 "path": "/tmp/tmp.slcj8uAhMA" 00:31:06.295 } 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "method": "keyring_file_add_key", 00:31:06.295 "params": { 00:31:06.295 "name": "key1", 00:31:06.295 "path": "/tmp/tmp.UwuCf0Jrxn" 00:31:06.295 } 00:31:06.295 } 00:31:06.295 ] 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "subsystem": "iobuf", 00:31:06.295 "config": [ 00:31:06.295 { 00:31:06.295 "method": "iobuf_set_options", 00:31:06.295 "params": { 00:31:06.295 "small_pool_count": 8192, 00:31:06.295 "large_pool_count": 1024, 00:31:06.295 "small_bufsize": 8192, 00:31:06.295 "large_bufsize": 135168 00:31:06.295 } 00:31:06.295 } 00:31:06.295 ] 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "subsystem": "sock", 00:31:06.295 "config": [ 00:31:06.295 { 00:31:06.295 "method": "sock_set_default_impl", 00:31:06.295 "params": { 00:31:06.295 "impl_name": "posix" 00:31:06.295 } 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "method": "sock_impl_set_options", 00:31:06.295 "params": { 00:31:06.295 "impl_name": "ssl", 00:31:06.295 "recv_buf_size": 4096, 00:31:06.295 "send_buf_size": 4096, 00:31:06.295 "enable_recv_pipe": true, 00:31:06.295 "enable_quickack": false, 00:31:06.295 "enable_placement_id": 0, 00:31:06.295 "enable_zerocopy_send_server": true, 00:31:06.295 "enable_zerocopy_send_client": false, 00:31:06.295 "zerocopy_threshold": 0, 00:31:06.295 "tls_version": 0, 00:31:06.295 "enable_ktls": false 00:31:06.295 } 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "method": "sock_impl_set_options", 00:31:06.295 "params": { 00:31:06.295 "impl_name": "posix", 00:31:06.295 "recv_buf_size": 2097152, 00:31:06.295 "send_buf_size": 2097152, 00:31:06.295 "enable_recv_pipe": true, 00:31:06.295 "enable_quickack": false, 00:31:06.295 "enable_placement_id": 0, 00:31:06.295 "enable_zerocopy_send_server": true, 00:31:06.295 "enable_zerocopy_send_client": false, 00:31:06.295 "zerocopy_threshold": 0, 00:31:06.295 "tls_version": 0, 00:31:06.295 "enable_ktls": false 00:31:06.295 } 00:31:06.295 } 00:31:06.295 ] 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "subsystem": "vmd", 00:31:06.295 "config": [] 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "subsystem": "accel", 00:31:06.295 "config": [ 00:31:06.295 { 00:31:06.295 "method": "accel_set_options", 00:31:06.295 "params": { 00:31:06.295 "small_cache_size": 128, 00:31:06.295 "large_cache_size": 16, 00:31:06.295 "task_count": 2048, 00:31:06.295 "sequence_count": 2048, 00:31:06.295 "buf_count": 2048 00:31:06.295 } 00:31:06.295 } 00:31:06.295 ] 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "subsystem": "bdev", 00:31:06.295 "config": [ 00:31:06.295 { 00:31:06.295 "method": "bdev_set_options", 00:31:06.295 "params": { 00:31:06.295 "bdev_io_pool_size": 65535, 00:31:06.295 "bdev_io_cache_size": 256, 00:31:06.295 "bdev_auto_examine": true, 00:31:06.295 "iobuf_small_cache_size": 128, 00:31:06.295 "iobuf_large_cache_size": 16 00:31:06.295 } 00:31:06.295 }, 00:31:06.295 { 00:31:06.295 "method": "bdev_raid_set_options", 00:31:06.296 "params": { 00:31:06.296 "process_window_size_kb": 1024 00:31:06.296 } 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "method": "bdev_iscsi_set_options", 00:31:06.296 "params": { 00:31:06.296 "timeout_sec": 30 00:31:06.296 } 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "method": 
"bdev_nvme_set_options", 00:31:06.296 "params": { 00:31:06.296 "action_on_timeout": "none", 00:31:06.296 "timeout_us": 0, 00:31:06.296 "timeout_admin_us": 0, 00:31:06.296 "keep_alive_timeout_ms": 10000, 00:31:06.296 "arbitration_burst": 0, 00:31:06.296 "low_priority_weight": 0, 00:31:06.296 "medium_priority_weight": 0, 00:31:06.296 "high_priority_weight": 0, 00:31:06.296 "nvme_adminq_poll_period_us": 10000, 00:31:06.296 "nvme_ioq_poll_period_us": 0, 00:31:06.296 "io_queue_requests": 512, 00:31:06.296 "delay_cmd_submit": true, 00:31:06.296 "transport_retry_count": 4, 00:31:06.296 "bdev_retry_count": 3, 00:31:06.296 "transport_ack_timeout": 0, 00:31:06.296 "ctrlr_loss_timeout_sec": 0, 00:31:06.296 "reconnect_delay_sec": 0, 00:31:06.296 "fast_io_fail_timeout_sec": 0, 00:31:06.296 "disable_auto_failback": false, 00:31:06.296 "generate_uuids": false, 00:31:06.296 "transport_tos": 0, 00:31:06.296 "nvme_error_stat": false, 00:31:06.296 "rdma_srq_size": 0, 00:31:06.296 "io_path_stat": false, 00:31:06.296 "allow_accel_sequence": false, 00:31:06.296 "rdma_max_cq_size": 0, 00:31:06.296 "rdma_cm_event_timeout_ms": 0, 00:31:06.296 "dhchap_digests": [ 00:31:06.296 "sha256", 00:31:06.296 "sha384", 00:31:06.296 "sha512" 00:31:06.296 ], 00:31:06.296 "dhchap_dhgroups": [ 00:31:06.296 "null", 00:31:06.296 "ffdhe2048", 00:31:06.296 "ffdhe3072", 00:31:06.296 "ffdhe4096", 00:31:06.296 "ffdhe6144", 00:31:06.296 "ffdhe8192" 00:31:06.296 ] 00:31:06.296 } 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "method": "bdev_nvme_attach_controller", 00:31:06.296 "params": { 00:31:06.296 "name": "nvme0", 00:31:06.296 "trtype": "TCP", 00:31:06.296 "adrfam": "IPv4", 00:31:06.296 "traddr": "127.0.0.1", 00:31:06.296 "trsvcid": "4420", 00:31:06.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.296 "prchk_reftag": false, 00:31:06.296 "prchk_guard": false, 00:31:06.296 "ctrlr_loss_timeout_sec": 0, 00:31:06.296 "reconnect_delay_sec": 0, 00:31:06.296 "fast_io_fail_timeout_sec": 0, 00:31:06.296 "psk": "key0", 00:31:06.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.296 "hdgst": false, 00:31:06.296 "ddgst": false 00:31:06.296 } 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "method": "bdev_nvme_set_hotplug", 00:31:06.296 "params": { 00:31:06.296 "period_us": 100000, 00:31:06.296 "enable": false 00:31:06.296 } 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "method": "bdev_wait_for_examine" 00:31:06.296 } 00:31:06.296 ] 00:31:06.296 }, 00:31:06.296 { 00:31:06.296 "subsystem": "nbd", 00:31:06.296 "config": [] 00:31:06.296 } 00:31:06.296 ] 00:31:06.296 }' 00:31:06.296 08:06:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:06.296 08:06:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 [2024-07-15 08:06:50.985102] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
00:31:06.296 [2024-07-15 08:06:50.985151] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456842 ] 00:31:06.296 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.555 [2024-07-15 08:06:51.051188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.555 [2024-07-15 08:06:51.130933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.555 [2024-07-15 08:06:51.290297] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:07.121 08:06:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:07.121 08:06:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:07.121 08:06:51 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:07.121 08:06:51 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:07.121 08:06:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.378 08:06:51 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:07.378 08:06:51 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:07.378 08:06:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:07.378 08:06:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.378 08:06:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.378 08:06:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:07.378 08:06:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.634 08:06:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:07.634 08:06:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.634 08:06:52 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:07.634 08:06:52 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:07.634 08:06:52 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:07.634 08:06:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:07.891 08:06:52 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:07.891 08:06:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:07.891 08:06:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.slcj8uAhMA /tmp/tmp.UwuCf0Jrxn 00:31:07.891 08:06:52 keyring_file -- keyring/file.sh@20 -- # killprocess 3456842 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3456842 ']' 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3456842 00:31:07.891 08:06:52 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3456842 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3456842' 00:31:07.891 killing process with pid 3456842 00:31:07.891 08:06:52 keyring_file -- common/autotest_common.sh@967 -- # kill 3456842 00:31:07.891 Received shutdown signal, test time was about 1.000000 seconds 00:31:07.891 00:31:07.892 Latency(us) 00:31:07.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.892 =================================================================================================================== 00:31:07.892 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:07.892 08:06:52 keyring_file -- common/autotest_common.sh@972 -- # wait 3456842 00:31:08.149 08:06:52 keyring_file -- keyring/file.sh@21 -- # killprocess 3455115 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3455115 ']' 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3455115 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3455115 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3455115' 00:31:08.149 killing process with pid 3455115 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@967 -- # kill 3455115 00:31:08.149 [2024-07-15 08:06:52.801158] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:08.149 08:06:52 keyring_file -- common/autotest_common.sh@972 -- # wait 3455115 00:31:08.407 00:31:08.407 real 0m12.218s 00:31:08.407 user 0m29.291s 00:31:08.407 sys 0m2.764s 00:31:08.407 08:06:53 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:08.407 08:06:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:08.407 ************************************ 00:31:08.407 END TEST keyring_file 00:31:08.407 ************************************ 00:31:08.407 08:06:53 -- common/autotest_common.sh@1142 -- # return 0 00:31:08.407 08:06:53 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:08.407 08:06:53 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:08.407 08:06:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:08.407 08:06:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.407 08:06:53 -- common/autotest_common.sh@10 -- # set +x 00:31:08.665 ************************************ 00:31:08.665 START TEST keyring_linux 00:31:08.665 ************************************ 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:08.665 * Looking for test storage... 00:31:08.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.665 08:06:53 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.665 08:06:53 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.665 08:06:53 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.665 08:06:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.665 08:06:53 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.665 08:06:53 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.665 08:06:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:08.665 08:06:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:08.665 08:06:53 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:08.665 /tmp/:spdk-test:key0 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:08.665 08:06:53 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:08.665 08:06:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:08.665 /tmp/:spdk-test:key1 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3457201 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3457201 00:31:08.665 08:06:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3457201 ']' 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.665 08:06:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:08.923 [2024-07-15 08:06:53.430147] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
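
Note: waitforlisten below gates on the freshly forked spdk_tgt answering on /var/tmp/spdk.sock before any RPCs are issued; a condensed sketch of that gate (the harness version retries up to max_retries=100 and checks the pid the same way):

./build/bin/spdk_tgt &
tgtpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$tgtpid"   # under set -e, bail out if the target died while starting
    sleep 0.1
done
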
00:31:08.923 [2024-07-15 08:06:53.430192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457201 ] 00:31:08.923 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.923 [2024-07-15 08:06:53.494516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.923 [2024-07-15 08:06:53.569094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.491 08:06:54 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.491 08:06:54 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:09.491 08:06:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:09.491 08:06:54 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.491 08:06:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:09.491 [2024-07-15 08:06:54.234818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.749 null0 00:31:09.750 [2024-07-15 08:06:54.266872] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:09.750 [2024-07-15 08:06:54.267197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.750 08:06:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:09.750 916476673 00:31:09.750 08:06:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:09.750 41780907 00:31:09.750 08:06:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3457426 00:31:09.750 08:06:54 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:09.750 08:06:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3457426 /var/tmp/bperf.sock 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3457426 ']' 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.750 08:06:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:09.750 [2024-07-15 08:06:54.334815] Starting SPDK v24.09-pre git sha1 897e912d5 / DPDK 24.03.0 initialization... 
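
Note: this is where the two interchange-format PSKs enter the kernel session keyring (the bare numbers printed above, 916476673 and 41780907, are the serials keyctl returns) and bperf starts paused with --wait-for-rpc. A sketch of the keyring side plus the unpause sequence that follows (psk0 stands for the NVMeTLSkey-1 string built earlier):

sn=$(keyctl add user :spdk-test:key0 "$psk0" @s)   # add to session keyring, prints the serial
keyctl search @s user :spdk-test:key0              # resolve name -> serial
keyctl print "$sn"                                 # dump the stored payload
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
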
00:31:09.750 [2024-07-15 08:06:54.334857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457426 ] 00:31:09.750 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.750 [2024-07-15 08:06:54.402329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.750 [2024-07-15 08:06:54.481696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.687 08:06:55 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.687 08:06:55 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:10.687 08:06:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:10.687 08:06:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:10.687 08:06:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:10.687 08:06:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:10.946 08:06:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:10.946 08:06:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:10.946 [2024-07-15 08:06:55.685127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:11.206 nvme0n1 00:31:11.206 08:06:55 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:11.206 08:06:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:11.206 08:06:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:11.206 08:06:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:11.206 08:06:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.206 08:06:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:11.465 08:06:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:11.465 08:06:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:11.465 08:06:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:11.465 08:06:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:11.465 08:06:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.465 08:06:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:11.465 08:06:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@25 -- # sn=916476673 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@26 -- # [[ 916476673 == \9\1\6\4\7\6\6\7\3 ]] 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 916476673 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:11.465 08:06:56 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:11.724 Running I/O for 1 seconds... 00:31:12.661 00:31:12.661 Latency(us) 00:31:12.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.661 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:12.661 nvme0n1 : 1.01 17941.99 70.09 0.00 0.00 7104.29 5698.78 15386.71 00:31:12.661 =================================================================================================================== 00:31:12.661 Total : 17941.99 70.09 0.00 0.00 7104.29 5698.78 15386.71 00:31:12.661 0 00:31:12.661 08:06:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:12.661 08:06:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:12.919 08:06:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:12.919 08:06:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.919 08:06:57 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:12.919 08:06:57 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:13.178 [2024-07-15 08:06:57.807146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:13.178 [2024-07-15 08:06:57.807421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a3fd0 (107): Transport endpoint is not connected 00:31:13.178 [2024-07-15 08:06:57.808416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a3fd0 (9): Bad file descriptor 00:31:13.178 [2024-07-15 08:06:57.809417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:13.178 [2024-07-15 08:06:57.809434] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:13.178 [2024-07-15 08:06:57.809441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:13.178 request: 00:31:13.178 { 00:31:13.178 "name": "nvme0", 00:31:13.178 "trtype": "tcp", 00:31:13.178 "traddr": "127.0.0.1", 00:31:13.178 "adrfam": "ipv4", 00:31:13.178 "trsvcid": "4420", 00:31:13.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.178 "prchk_reftag": false, 00:31:13.178 "prchk_guard": false, 00:31:13.178 "hdgst": false, 00:31:13.178 "ddgst": false, 00:31:13.178 "psk": ":spdk-test:key1", 00:31:13.178 "method": "bdev_nvme_attach_controller", 00:31:13.178 "req_id": 1 00:31:13.178 } 00:31:13.178 Got JSON-RPC error response 00:31:13.178 response: 00:31:13.178 { 00:31:13.178 "code": -5, 00:31:13.178 "message": "Input/output error" 00:31:13.178 } 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@33 -- # sn=916476673 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 916476673 00:31:13.178 1 links removed 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@33 -- # sn=41780907 00:31:13.178 
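
Note: cleanup resolves each test key's serial in the session keyring and unlinks it, which is what produces the "1 links removed" lines here; a sketch of that teardown, tolerant of keys that were never added:

for name in :spdk-test:key0 :spdk-test:key1; do
    if sn=$(keyctl search @s user "$name" 2> /dev/null); then
        keyctl unlink "$sn"   # prints e.g. "1 links removed"
    fi
done
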
08:06:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 41780907 00:31:13.178 1 links removed 00:31:13.178 08:06:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3457426 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3457426 ']' 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3457426 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3457426 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3457426' 00:31:13.178 killing process with pid 3457426 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@967 -- # kill 3457426 00:31:13.178 Received shutdown signal, test time was about 1.000000 seconds 00:31:13.178 00:31:13.178 Latency(us) 00:31:13.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.178 =================================================================================================================== 00:31:13.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.178 08:06:57 keyring_linux -- common/autotest_common.sh@972 -- # wait 3457426 00:31:13.437 08:06:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3457201 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3457201 ']' 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3457201 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3457201 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3457201' 00:31:13.437 killing process with pid 3457201 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@967 -- # kill 3457201 00:31:13.437 08:06:58 keyring_linux -- common/autotest_common.sh@972 -- # wait 3457201 00:31:13.728 00:31:13.728 real 0m5.243s 00:31:13.728 user 0m9.480s 00:31:13.728 sys 0m1.514s 00:31:13.728 08:06:58 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.729 08:06:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:13.729 ************************************ 00:31:13.729 END TEST keyring_linux 00:31:13.729 ************************************ 00:31:13.729 08:06:58 -- common/autotest_common.sh@1142 -- # return 0 00:31:13.729 08:06:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:13.729 08:06:58 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:13.729 08:06:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:13.729 08:06:58 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:13.729 08:06:58 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:13.729 08:06:58 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:13.729 08:06:58 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:13.729 08:06:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:13.729 08:06:58 -- common/autotest_common.sh@10 -- # set +x 00:31:13.729 08:06:58 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:13.729 08:06:58 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:13.729 08:06:58 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:13.729 08:06:58 -- common/autotest_common.sh@10 -- # set +x 00:31:19.001 INFO: APP EXITING 00:31:19.001 INFO: killing all VMs 00:31:19.001 INFO: killing vhost app 00:31:19.001 INFO: EXIT DONE 00:31:21.537 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:21.537 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:21.537 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:21.796 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:21.796 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:24.331 Cleaning 00:31:24.331 Removing: /var/run/dpdk/spdk0/config 00:31:24.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:24.592 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:24.592 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:24.592 Removing: /var/run/dpdk/spdk1/config 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:24.592 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:24.592 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:24.592 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:24.592 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:24.592 Removing: /var/run/dpdk/spdk2/config 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:24.592 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:24.592 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:24.592 Removing: /var/run/dpdk/spdk3/config 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:24.592 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:24.592 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:24.592 Removing: /var/run/dpdk/spdk4/config 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:24.592 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:24.592 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:24.592 Removing: /dev/shm/bdev_svc_trace.1 00:31:24.592 Removing: /dev/shm/nvmf_trace.0 00:31:24.592 Removing: /dev/shm/spdk_tgt_trace.pid3068248 00:31:24.592 Removing: /var/run/dpdk/spdk0 00:31:24.592 Removing: /var/run/dpdk/spdk1 00:31:24.592 Removing: /var/run/dpdk/spdk2 00:31:24.592 Removing: /var/run/dpdk/spdk3 00:31:24.592 Removing: /var/run/dpdk/spdk4 00:31:24.592 Removing: /var/run/dpdk/spdk_pid2925349 00:31:24.592 Removing: /var/run/dpdk/spdk_pid3066122 00:31:24.592 Removing: /var/run/dpdk/spdk_pid3067185 00:31:24.592 Removing: /var/run/dpdk/spdk_pid3068248 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3068883 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3069829 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3070076 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3071045 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3071275 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3071460 00:31:24.852 Removing: 
/var/run/dpdk/spdk_pid3073118 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3074388 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3074671 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3074960 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3075267 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3075554 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3075805 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3076061 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3076336 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3077076 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3080144 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3080416 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3080810 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3080826 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3081317 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3081546 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3081861 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3082046 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3082307 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3082507 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3082598 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3082814 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3083361 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3083576 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3083898 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3084169 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3084202 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3084270 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3084586 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3084921 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3085189 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3085466 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3085737 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3086067 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3086626 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3086933 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3087217 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3087486 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3087741 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3087995 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3088242 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3088493 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3088740 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3088992 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3089249 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3089497 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3089753 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3090001 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3090091 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3090570 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3094246 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3138353 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3142813 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3152868 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3158243 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3162084 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3162739 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3168961 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3174981 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3174983 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3175898 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3176810 00:31:24.852 Removing: /var/run/dpdk/spdk_pid3177727 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3178200 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3178202 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3178437 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3178658 00:31:25.113 Removing: 
/var/run/dpdk/spdk_pid3178664 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3179602 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3180541 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3181721 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3182392 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3182396 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3182632 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3183876 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3184947 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3193353 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3193649 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3197899 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3203775 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3206377 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3216784 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3225994 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3228032 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3228956 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3245769 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3249639 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3275368 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3279868 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3281467 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3283311 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3283543 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3283787 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3284022 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3284531 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3286403 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3287449 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3288023 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3290178 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3290903 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3291586 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3295677 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3305628 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3310179 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3316173 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3317518 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3319015 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3323313 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3327540 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3335023 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3335121 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3339614 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3339842 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3340077 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3340530 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3340542 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3345015 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3345581 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3349940 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3352702 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3358594 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3364155 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3372914 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3380134 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3380136 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3398211 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3398902 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3399598 00:31:25.113 Removing: /var/run/dpdk/spdk_pid3400117 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3401051 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3401751 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3402551 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3403058 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3407800 00:31:25.373 Removing: 
/var/run/dpdk/spdk_pid3408103 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3413999 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3414262 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3416486 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3424444 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3424450 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3429491 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3431459 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3433427 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3434555 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3436572 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3437718 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3446579 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3447358 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3448102 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3450370 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3450834 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3451300 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3455115 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3455218 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3456842 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3457201 00:31:25.373 Removing: /var/run/dpdk/spdk_pid3457426 00:31:25.373 Clean 00:31:25.373 08:07:10 -- common/autotest_common.sh@1451 -- # return 0 00:31:25.373 08:07:10 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:25.373 08:07:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.373 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:31:25.373 08:07:10 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:25.373 08:07:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.373 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:31:25.632 08:07:10 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:25.632 08:07:10 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:25.633 08:07:10 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:25.633 08:07:10 -- spdk/autotest.sh@391 -- # hash lcov 00:31:25.633 08:07:10 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:25.633 08:07:10 -- spdk/autotest.sh@393 -- # hostname 00:31:25.633 08:07:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:25.633 geninfo: WARNING: invalid characters removed from testname! 
00:31:47.564 08:07:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:48.501 08:07:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:50.401 08:07:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:52.307 08:07:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:54.211 08:07:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:55.637 08:07:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:57.539 08:07:42 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:57.539 08:07:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:57.539 08:07:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:57.539 08:07:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:57.539 08:07:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:57.540 08:07:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:57.540 08:07:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:57.540 08:07:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:57.540 08:07:42 -- paths/export.sh@5 -- $ export PATH
00:31:57.540 08:07:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:57.540 08:07:42 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:57.540 08:07:42 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:57.540 08:07:42 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721023662.XXXXXX
00:31:57.540 08:07:42 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721023662.gWNaxm
00:31:57.540 08:07:42 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:57.540 08:07:42 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:57.540 08:07:42 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:57.540 08:07:42 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:57.540 08:07:42 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:57.540 08:07:42 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:57.540 08:07:42 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:57.540 08:07:42 -- common/autotest_common.sh@10 -- $ set +x
00:31:57.540 08:07:42 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:57.540 08:07:42 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:57.540 08:07:42 -- pm/common@17 -- $ local monitor
00:31:57.540 08:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:57.540 08:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:57.540 08:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:57.540 08:07:42 -- pm/common@21 -- $ date +%s
00:31:57.540 08:07:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:57.540 08:07:42 -- pm/common@21 -- $ date +%s
00:31:57.540 08:07:42 -- pm/common@25 -- $ sleep 1
00:31:57.540 08:07:42 -- pm/common@21 -- $ date +%s
00:31:57.540 08:07:42 -- pm/common@21 -- $ date +%s
00:31:57.540 08:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023662
00:31:57.540 08:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023662
00:31:57.540 08:07:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023662
00:31:57.540 08:07:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721023662
00:31:57.540 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023662_collect-vmstat.pm.log
00:31:57.540 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023662_collect-cpu-load.pm.log
00:31:57.540 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023662_collect-cpu-temp.pm.log
00:31:57.540 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721023662_collect-bmc-pm.bmc.pm.log
00:31:58.476 08:07:43 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:58.476 08:07:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:58.476 08:07:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:58.476 08:07:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:58.476 08:07:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:58.476 08:07:43 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:58.476 08:07:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:58.476 08:07:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:58.476 08:07:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:58.476 08:07:43 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:58.476 08:07:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:58.476 08:07:43 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:58.476 08:07:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:58.476 08:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:58.476 08:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:58.476 08:07:43 -- pm/common@44 -- $ pid=3467708
00:31:58.476 08:07:43 -- pm/common@50 -- $ kill -TERM 3467708
00:31:58.476 08:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:58.476 08:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:58.476 08:07:43 -- pm/common@44 -- $ pid=3467710
00:31:58.476 08:07:43 -- pm/common@50 -- $ kill -TERM 3467710
00:31:58.476 08:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:58.476 08:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:58.476 08:07:43 -- pm/common@44 -- $ pid=3467711
00:31:58.476 08:07:43 -- pm/common@50 -- $ kill -TERM 3467711
00:31:58.476 08:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:58.734 08:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:58.734 08:07:43 -- pm/common@44 -- $ pid=3467734
00:31:58.734 08:07:43 -- pm/common@50 -- $ sudo -E kill -TERM 3467734
00:31:58.734 + [[ -n 2961154 ]]
00:31:58.734 + sudo kill 2961154
00:31:58.744 [Pipeline] }
00:31:58.766 [Pipeline] // stage
00:31:58.771 [Pipeline] }
00:31:58.787 [Pipeline] // timeout
00:31:58.792 [Pipeline] }
00:31:58.808 [Pipeline] // catchError
00:31:58.813 [Pipeline] }
00:31:58.830 [Pipeline] // wrap
00:31:58.837 [Pipeline] }
00:31:58.853 [Pipeline] // catchError
00:31:58.863 [Pipeline] stage
00:31:58.865 [Pipeline] { (Epilogue)
00:31:58.881 [Pipeline] catchError
00:31:58.882 [Pipeline] {
00:31:58.898 [Pipeline] echo
00:31:58.899 Cleanup processes
00:31:58.905 [Pipeline] sh
00:31:59.187 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:59.187 3467824 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:59.187 3468109 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:59.204 [Pipeline] sh
00:31:59.490 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:59.490 ++ grep -v 'sudo pgrep'
00:31:59.490 ++ awk '{print $1}'
00:31:59.490 + sudo kill -9 3467824
00:31:59.502 [Pipeline] sh
00:31:59.784 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:09.771 [Pipeline] sh
00:32:10.053 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:10.053 Artifacts sizes are good
00:32:10.068 [Pipeline] archiveArtifacts
00:32:10.075 Archiving artifacts
00:32:10.238 [Pipeline] sh
00:32:10.522 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:10.540 [Pipeline] cleanWs
00:32:10.551 [WS-CLEANUP] Deleting project workspace...
00:32:10.551 [WS-CLEANUP] Deferred wipeout is used...
00:32:10.557 [WS-CLEANUP] done
00:32:10.560 [Pipeline] }
00:32:10.577 [Pipeline] // catchError
00:32:10.586 [Pipeline] sh
00:32:10.864 + logger -p user.info -t JENKINS-CI
00:32:10.875 [Pipeline] }
00:32:10.896 [Pipeline] // stage
00:32:10.902 [Pipeline] }
00:32:10.923 [Pipeline] // node
00:32:10.933 [Pipeline] End of Pipeline
00:32:10.970 Finished: SUCCESS